How to deploy a Publ website using Heroku
Heroku is probably the easiest environment to configure for Publ, especially for smaller websites. However, it is primarily suited to experimenting with Publ, as Heroku comes with a number of limitations:
- Your git deployment size must be under 1 GB
- Your slug size must be under the 500 MB Heroku limit
- SQLite databases will not persist across site deployments, requiring a full reindex every time your site changes
- You can use a Postgres database instead but this causes a (very) slight performance hit on page loads
That said, Heroku is a great platform for running a smaller Publ site. Additionally, on the higher tiers you get some nice features like automatic load-balancing and staged deployments, which are probably overkill for the sites you’re running Publ for, but are still nice to have.
You’ll need a Heroku account, of course, and you’ll want to go through their Python introduction to get your local environment configured.
This also assumes you are using a git repository to manage your website files.
Setting up your site files
The easiest way to configure Heroku is using `gunicorn`. Assuming you have a local test version of your site on your computer, do the following:
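A sketch of the install step, assuming you are using `pipenv` to manage dependencies (the exact package list is an assumption; adjust it to match your site):

```shell
# Install Publ plus the gunicorn server Heroku will use to run it
pipenv install publ gunicorn
```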
This will install all of the deployment requirements for Heroku, and record them in your `Pipfile` and `Pipfile.lock`. Check these files into your git repository.
At this point you should be able to run it locally, connecting to the URL gunicorn tells you (likely `http://127.0.0.1:8000`):
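For example, assuming your Publ application object is named `app` inside `app.py` (the module and object names are assumptions about your site layout):

```shell
pipenv run gunicorn app:app
```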
Next, you’ll need a `Procfile`, which tells Heroku how to launch your site:
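A minimal `Procfile` might look like the following (again assuming the application object is `app` in `app.py`):

```
web: gunicorn app:app
```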
Check this file in as well.
Setting up Heroku
Now you should be ready to deploy! You’ll need to create your Heroku app and add the git remote to your site’s git repository with e.g.:
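For example (note that `heroku create`, when run inside the repository, normally adds the `heroku` remote for you; the second command is only needed to attach an existing app):

```shell
heroku create
# or, to attach an already-created app to this repository:
heroku git:remote -a NAME_OF_APP
```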
replacing `NAME_OF_APP` with whatever name `heroku create` gave you.
Deploying to Heroku is now as simple as:
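A sketch of the deploy step; note that older Heroku apps deploy from `master` rather than `main`, so use whichever branch name your app expects:

```shell
git push heroku main
```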
You may also want to run `heroku logs --tail` to watch its progress.
As mentioned above, most of the website startup time is taken up by the initial content index. Since Heroku does not persist your filesystem, you might want to consider using a Postgres database to store the index persistently. Note that this will slow your site down a little bit (since accessing an in-process SQLite database is faster than going over the network to talk to Postgres), but it reduces the amount of site downtime during a content update so that might be a worthwhile tradeoff depending on your needs.
This slowdown can also be mitigated by increasing your cache timeout; since the site will be redeployed whenever content updates, the cache timeout really only affects how soon scheduled posts will appear after they are set to go live.
To provision a Postgres database at the free tier, from your local git repository, type the following:
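Something like the following should work; the plan name is an assumption and has changed over the years, so check Heroku's current Postgres plan listing:

```shell
heroku addons:create heroku-postgresql:hobby-dev
```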
This creates a database and exports its connection URL to your environment as `DATABASE_URL`. Unfortunately, PonyORM does not directly support database connection URLs, so we need to do a bit more work on our end. Namely, in `app.py`, change the `database_config` part to the following:
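A sketch of what that change might look like; the helper name and the exact keyword names are assumptions (PonyORM passes them through to the underlying Postgres driver), so treat this as a starting point rather than the definitive configuration:

```python
import os
import urllib.parse


def database_config_from_url(url):
    """Split a Heroku-style DATABASE_URL (e.g.
    postgres://user:pass@host:5432/dbname) into the discrete
    keyword arguments that PonyORM's db.bind() expects."""
    parts = urllib.parse.urlparse(url)
    return {
        'provider': 'postgres',
        'user': parts.username,
        'password': parts.password,
        'host': parts.hostname,
        'port': parts.port,
        'dbname': parts.path.lstrip('/'),
    }


if 'DATABASE_URL' in os.environ:
    # Running on Heroku (or pointed at the Heroku database locally)
    database_config = database_config_from_url(os.environ['DATABASE_URL'])
else:
    # Fall back to a local SQLite index for development
    database_config = {'provider': 'sqlite', 'filename': 'index.db'}
```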
A full example can be seen in the `app.py` for this site’s repository.
You will also need to install the `psycopg2` library; this is simply:
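Assuming `pipenv` as above:

```shell
pipenv install psycopg2
```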
If you get an error about it failing to install (accompanied by a gigantic log), you might try pinning the version:
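For example (the version number here is purely illustrative; pick a release known to build on Heroku's current stack):

```shell
# 2.8.6 is an illustrative version, not a recommendation
pipenv install psycopg2==2.8.6
```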
For local testing (to make sure everything is wired up correctly) you can do:
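One way to do this is to point a local gunicorn at the Heroku-hosted database (a sketch, again assuming the `app:app` entry point):

```shell
DATABASE_URL=$(heroku config:get DATABASE_URL) pipenv run gunicorn app:app
```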
although be advised that the database scan will be much slower than in production.