A repository is just a centralized place to store your .kjb and .ktr files. If you have a mechanism that makes those files available on the same disk as Pentaho Data Integration, then all you really need is cron to run ./kitchen.sh (for jobs) or ./pan.sh (for transformations) at a specific time.
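For example, a manual test run from the PDI directory might look like the following; the project paths are placeholders, and -file and -level are standard Kitchen/Pan options:

cd /opt/pentaho/pdi/latest/data-integration
./kitchen.sh -file=/opt/pentaho/ETL/my_project/content/my_job.kjb -level=Basic
./pan.sh -file=/opt/pentaho/ETL/my_project/content/my_transform.ktr -level=Basic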
I would suggest a common layout for your ETL projects to make everything more deterministic:
project_named_directory/content <- all .kjb and .ktr files go here
project_named_directory/input <- all flat-file inputs go here
project_named_directory/output <- if your ETL emits files, they go here
project_named_directory/environment <- any properties files or standard connection settings go here, to keep your PDI install clean (see the sketch below). Read the properties in and let your JDBC connections use those variables; that saves you the hassle of mucking up your PDI /simple-jndi directory, /home/pentaho/.kettle/kettle.properties, or /home/pentaho/.kettle/shared.xml.
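As a rough sketch, an environment file might look like this (the file name, keys, and values are all hypothetical):

# environment/dev.properties -- hypothetical connection settings
DB_HOST=dbserver.example.com
DB_PORT=5432
DB_NAME=warehouse
DB_USER=etl_user
DB_PASSWORD=changeme

A job or transformation can read this file (for example with a 'Property Input' step feeding 'Set Variables', or by passing the values as named parameters), and the database connection dialog can then reference them as ${DB_HOST}, ${DB_PORT}, and so on.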
Here is a cron example that runs a job at 1 AM every day:
# minute (0-59)
# hour (0-23)
# day of the month (1-31)
# month of the year (1-12)
# day of the week (0-6, with 0=Sunday)
# command
### Budgeted Census
0 1 * * * cd /opt/pentaho/pdi/latest/data-integration; ./kitchen.sh -file=/opt/pentaho/ETL/Build\ Budgeted\ Census\ Data/content/build_budgeted_census_data.kjb;
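By default cron mails or discards the job's output, so if you want a record of each run, a variant of the same entry can redirect stdout and stderr to a log file (the log path below is a placeholder):

0 1 * * * cd /opt/pentaho/pdi/latest/data-integration; ./kitchen.sh -file=/opt/pentaho/ETL/Build\ Budgeted\ Census\ Data/content/build_budgeted_census_data.kjb >> /var/log/pentaho/budgeted_census.log 2>&1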