Working with WordPress can often be time-consuming, particularly when managing multiple environments like local, staging, and production. In our team at DEside, we used to follow a traditional workflow:
- Install WordPress locally using XAMPP, MAMP, or another local server.
- Develop and test the site locally.
- Build themes, plugins, and other assets locally.
- Create a backup (we used to do this manually by exporting the WordPress folder and the SQL database, but recently we switched to Akeeba Backup).
- Deploy the site to a staging server.
- Test on the staging server, often with manual user tests by team members.
- Share the staging site with the client for feedback.
The Problem with Traditional Workflows
Typically, after the client reviews the staging site, they request changes. At this point, you face two options:
- Make the changes locally and go through the entire deployment process again. This is time-consuming and can become frustrating.
- Make the changes directly on the staging server, which requires rebuilding and uploading files for every change — an equally time-consuming process.
Both approaches lead to inefficiencies. For example, in our projects, even with optimizations like switching to Vite, building could take about 5 seconds, and uploading via FTP another 30 seconds. This adds up quickly with frequent changes. Moreover, not keeping local and staging environments in sync leads to further complications, ultimately degrading Developer Experience (DevX).
Rethinking the Workflow
Why do we typically work locally before deploying to staging? Mainly to avoid the idle time that builds and uploads introduce. Working offline is rarely the real motivation, since most of us are connected to a network all day anyway.
So, why not connect the local and the staging environments to the same database? Since both are used solely for development, sharing a database could save considerable time by keeping the environments in sync. While this sacrifices the ability to work offline, the trade-off may be worth it considering the time saved.
A Basic Solution: Modifying wp-config.php
One simple solution is to connect your local WordPress installation to the remote database by editing the `wp-config.php` file. You can modify parameters such as `DB_HOST`, `DB_NAME`, `DB_USER`, and `DB_PASSWORD`, and potentially adjust the `$table_prefix`. Additionally, you should redefine the `WP_HOME` and `WP_SITEURL` constants in `wp-config.php`. After making these changes, restart your server to apply them.
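For reference, a minimal sketch of those edits might look like this (all hosts, credentials, and URLs below are placeholders for your own values):

```php
// wp-config.php: point the local installation at the remote (staging) database.
// Every value below is a placeholder.
define( 'DB_HOST', 'staging-db.example.com' );
define( 'DB_NAME', 'staging_wp' );
define( 'DB_USER', 'staging_user' );
define( 'DB_PASSWORD', 'change-me' );

$table_prefix = 'wp_'; // must match the prefix used on staging

// Serve the shared content through the local URL.
define( 'WP_HOME', 'http://localhost:8080' );
define( 'WP_SITEURL', 'http://localhost:8080' );
```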
While this works, it has limitations, as you need to configure each new local environment manually.
A Better Solution: Automating with Docker
A more efficient solution is to automate the entire process using Docker. I’ve created a simple configuration available on my GitHub repo that automates these tasks.
The repository contains:
- `docker-compose.yml`: Defines the Docker containers.
- `change-url.sh`: A custom script that modifies `wp-config.php` to set `WP_HOME` and `WP_SITEURL` and to address the media uploads issue.
- `.env.sample`: Defines the necessary environment variables.
Benefits of This Approach
- Automation: The Docker setup automatically handles the modifications to the `wp-config.php` file, reducing manual effort.
- Easier Environment Management: Instead of setting up a fresh WordPress installation for every new local environment, you simply add a new service to the `docker-compose.yml` file and create a corresponding `.env` file to spin up an environment, without the hassle of recreating servers from scratch. This not only saves time but also ensures consistency across environments.
Code Implementation
Let’s dive into the implementation details.
The `docker-compose.yml` file is structured as follows:
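The exact file lives in the repository; a minimal sketch of such a configuration could look like this (the service name, port mapping, and variable names are assumptions):

```yaml
# docker-compose.yml: a minimal sketch, not the repository's exact file.
services:
  wordpress:
    image: wordpress:latest                 # pull the latest official image
    ports:
      - "8080:80"                           # local site at http://localhost:8080
    env_file: .env                          # expose the variables to the container
    environment:
      WORDPRESS_DB_HOST: ${DB_HOST}         # the remote (staging) database
      WORDPRESS_DB_NAME: ${DB_NAME}
      WORDPRESS_DB_USER: ${DB_USER}
      WORDPRESS_DB_PASSWORD: ${DB_PASSWORD}
    volumes:
      - ./wordpress:/var/www/html                      # persist WordPress files locally
      - ./change-url.sh:/usr/local/bin/change-url.sh   # the URL-rewriting script
    # Launch the script in the background (it waits for wp-config.php to
    # appear), then hand control back to the image's stock entrypoint.
    command: bash -c "(bash /usr/local/bin/change-url.sh &); exec docker-entrypoint.sh apache2-foreground"
```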
This configuration pulls the latest WordPress image, sets up the necessary environment variables, and runs the `change-url.sh` script, which modifies the `wp-config.php` file as needed.
The `.env.sample` file should be configured with the environment variables necessary for the Docker image.
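A sketch of what it might contain, where every name and value is a placeholder and the repository defines the authoritative set:

```bash
# .env.sample: copy to .env and fill in your own values (all placeholders).

# Remote (staging) database that the local container should connect to.
DB_HOST=staging-db.example.com
DB_NAME=staging_wp
DB_USER=staging_user
DB_PASSWORD=change-me

# URL the local container is reachable at (used for WP_HOME / WP_SITEURL).
WP_HOME=http://localhost:8080
WP_SITEURL=http://localhost:8080

# Staging host, used by change-url.sh to rewrite media URLs.
STAGING_URL=https://staging.example.com
```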
The `change-url.sh` script is critical for this setup:
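The repository holds the authoritative version; the sketch below captures the idea, with the paths and the must-use-plugin approach to media URLs being my assumptions:

```bash
#!/usr/bin/env bash
# change-url.sh: a sketch of the idea, not the repository's exact script.
# Assumes WP_HOME, WP_SITEURL and STAGING_URL are set in the environment
# (see .env.sample above).
set -u

WP_CONFIG="/var/www/html/wp-config.php"

# On the first boot the official entrypoint generates wp-config.php, so wait
# until the file exists before patching it.
until [ -f "$WP_CONFIG" ]; do sleep 1; done

# WP_HOME / WP_SITEURL must be defined before wp-settings.php loads, so insert
# the constants right after the opening <?php tag (the grep keeps this idempotent).
if ! grep -q "WP_HOME" "$WP_CONFIG"; then
  sed -i "s#^<?php#<?php\ndefine( 'WP_HOME', '${WP_HOME}' );\ndefine( 'WP_SITEURL', '${WP_SITEURL}' );#" "$WP_CONFIG"
fi

# Media fix: uploads only ever happen on staging, so rewrite attachment URLs
# to the staging host through a small must-use plugin (one possible approach).
MU_DIR="/var/www/html/wp-content/mu-plugins"
mkdir -p "$MU_DIR"
cat > "$MU_DIR/staging-uploads.php" <<PHP
<?php
/* Generated by change-url.sh: load media files from the staging server. */
add_filter( 'upload_dir', function ( \$dirs ) {
    \$base = '${STAGING_URL}/wp-content/uploads';
    \$dirs['baseurl'] = \$base;
    \$dirs['url']     = \$base . \$dirs['subdir'];
    return \$dirs;
} );
PHP
```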
This script ensures that the WordPress installation on your local machine correctly reflects the configurations set for the staging environment, including handling media files.
Potential Drawbacks
One limitation of this setup is the synchronization of plugins and themes between local and staging environments. Since WordPress stores active plugin and theme data in the database, discrepancies between environments can lead to plugins being deactivated when accessing the WordPress admin panel.
For themes, this is less of an issue because multiple themes can coexist in the local environment, even if they are not all present on the staging server. The correct theme will be activated each time you run the Docker container, avoiding any major disruptions.
However, for plugins, this can be more problematic. For instance, if you’re using Advanced Custom Fields (ACF) on the staging server, those fields won’t render locally unless ACF is also installed there. To mitigate this, always ensure that the necessary plugins are installed and activated in both environments to maintain consistency.
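For example, if WP-CLI is available in both environments, you could mirror staging's active plugins into your local installation along these lines (the `staging` SSH alias is hypothetical, and `wp plugin install` only covers plugins hosted on wordpress.org; premium plugins still need manual copying):

```bash
# List the plugins active on staging, then install the same set locally.
wp plugin list --status=active --field=name --ssh=staging > plugins.txt
xargs -a plugins.txt -n1 wp plugin install
```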
Conclusion
Used in this way, Docker can take much of the friction out of WordPress development. At DEside, this approach has drastically improved our development speed, and we continue to explore further optimizations.
If you have any feedback or suggestions, feel free to share them. You can contribute directly by opening an issue or a pull request on the GitHub repository.
Bonus: A Small Plugin to Avoid Media Problems
To avoid confusion and ensure media files are always uploaded to the staging server, we developed a small plugin that blocks media uploads on the local server. This plugin serves as a reminder to only upload media on the staging server, ensuring consistency across environments. You can find the plugin on its GitHub repository.
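The repository contains the real implementation; conceptually, it boils down to something like this sketch (the staging-host check below is a simplified assumption; adapt it to however you identify your environments):

```php
<?php
/**
 * Plugin Name: Block Local Media Uploads
 * Description: Sketch of the idea. Rejects uploads unless the site is
 *              running on the staging host, so media files only ever land
 *              on the staging server's filesystem.
 */

add_filter( 'wp_handle_upload_prefilter', function ( $file ) {
    // Hypothetical check: are we running on the staging host?
    if ( false === strpos( home_url(), 'staging.example.com' ) ) {
        // Setting 'error' aborts the upload and shows this message instead.
        $file['error'] = 'Media uploads are disabled here. Please upload files on the staging server.';
    }
    return $file;
} );
```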