Dockerize your FastAPI application that incorporates Alembic and SQLAlchemy. Solve the permission issue between the container and host for the migrations.

Solving the FastAPI, Alembic, Docker Problem

2025/12/05 14:26

I was working on a FastAPI application and wanted to dockerize it for development. You know, the usual: Dockerfile, Docker Compose and the lot.

It was meant to be a straightforward process, until I ran my Alembic migration command and realized the generated migration version never appeared in my local directory. The natural solution is a bind mount, which, once more, should have been an easy fix. Until it wasn't.

I checked out different solutions and found one that proposed using a bind mount but giving up the watch feature of Compose. I'm just too stubborn to accept that. I wanted to have my cake and eat it! I wanted my versions on my host after they were generated in the container, and I also wanted the "watch" feature from Compose. The root problem was permission issues between the host and the container.

This write-up is my fix, and it should work for folks using Flask too. The brilliant uv (which I highly recommend) is used for package and dependency management. Django users can check this out. Enough story, let's set up.

Dockerfile

# 1. Use Python 3.13 bookworm as the base
FROM ghcr.io/astral-sh/uv:python3.13-bookworm

# 2. Set the working directory
WORKDIR /app

# 3. Set environment variables for UV
# UV_COMPILE_BYTECODE: compiles Python files to .pyc for faster startup
# UV_LINK_MODE: copy (safer for Docker layers than hardlinks)
ENV UV_COMPILE_BYTECODE=1
ENV UV_LINK_MODE=copy

# 4. Create a non-root user for security
RUN useradd -u 1000 app

# 5. Set HOME explicitly to /app
# This prevents the "/nonexistent/.cache/uv" error by telling tools
# to look for their config/cache in /app instead of /nonexistent
ENV HOME=/app

# 6. Ensure the /app directory (and everything inside) is owned by the app user
# We run this BEFORE switching users
RUN chown -R app:app /app

# 7. Copy pyproject.toml and uv.lock with correct ownership
# Changing ownership during COPY is more efficient than running chown later
COPY --chown=app:app pyproject.toml uv.lock ./

# 8. Switch to the non-root user BEFORE installing dependencies
USER app

# 9. Install dependencies
# Since we are now the 'app' user, the .venv will be owned by 'app'
# and the cache will be written to /app/.cache/uv.
RUN uv sync --frozen --no-install-project --no-dev

# 10. Copy the rest of the application code
COPY --chown=app:app . .
RUN uv sync --frozen --no-dev

# 11. Expose the port
EXPOSE 8000

# 12. Command to run the application
# For production: CMD ["uv", "run", "fastapi", "run", "main.py" ...
# This assumes your main.py is in the project root; I usually have mine in
# src/main.py, so change it accordingly
CMD ["uv", "run", "fastapi", "dev", "main.py", "--host", "0.0.0.0", "--port", "8000"]

I will explain some of my choices here; the comments should suffice for the rest.

In (4.) above, we create a user and assign it a UID (user ID) of 1000. When files are created, they are assigned a UID and a GID (group ID). We specifically prefer 1000 because many Linux distributions assign it to the first regular user created on the system. Matching 1000 therefore aligns the container UID with the host UID for easier file permission mapping. Feel free to switch things up if this is not your use case, since you now understand the rationale.
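If you want to confirm what your own host UID and GID actually are before hard-coding 1000, these standard commands will tell you:

id -u   # your user ID, often 1000 for the first regular user
id -g   # your group ID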

In (5.) above, I am using uv for package and project management on my host, and it just makes sense to continue that in the container. When you install a dependency with uv, it caches that dependency globally (i.e. in a system directory), which pays off when you re-install it or use it in another project. It usually stores the cache in $HOME/.cache/uv; by setting HOME to /app, we make sure it is stored in /app/.cache/uv. If we don't do that, uv tries to use /nonexistent/.cache/uv, which is either unwritable or absent and causes errors. You can also choose not to keep a cache at all, which greatly reduces your final image size, especially for production; for development, I choose to keep it. You can use the UV_NO_CACHE environment variable for that.

Let us also look at (9.) above. Continuing from (5.), you could also include the --no-cache flag here.
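As a sketch, if you wanted the cache-free variant for a leaner production image, either of these should do the trick (UV_NO_CACHE and --no-cache are both documented uv options):

# Option A: disable the uv cache for every uv invocation in the image
ENV UV_NO_CACHE=1

# Option B: disable it only for this install step
RUN uv sync --frozen --no-install-project --no-dev --no-cache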

--frozen: uses uv.lock to guarantee consistent installs across environments.

--no-install-project: avoids installing the project itself at this stage, for optimal layer caching. Because this layer depends only on pyproject.toml and uv.lock, a change to your source code does not invalidate it; only the project install in (10.) is redone, making image rebuilds much faster.

--no-dev: avoids installing development dependencies.

Lovely! Congratulations if you made it through that. It gets easier from here!

Docker Compose

Let's write the docker-compose.yaml (yes, the Docker docs prefer yaml over yml). The aim is to build our FastAPI application and connect it to a Postgres database. Let us write it and then explain a few things:

services:
  db:
    image: postgres
    env_file:
      - .env
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql
    restart: always
  api:
    build: .
    env_file:
      - .env
    depends_on:
      - db
    ports:
      - "8000:8000"
    develop:
      watch:
        - path: ./src
          action: sync
          target: /app/src
        - path: ./pyproject.toml
          action: rebuild
    working_dir: /app
    volumes:
      - ./migrations:/app/migrations:z
      - ./migrations/versions:/app/migrations/versions:z

volumes:
  postgres_data:

I love my Docker Compose file looking neat and simple, and I typically avoid unnecessary variables. For example, I can never quite wrap my head around why people use environment and list out a long block of environment variables in the YAML file, excessively lengthening it. Having a .env file and simply pointing env_file at it works a treat. You should follow these best practices for a smooth experience.
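For reference, a minimal .env for this setup might look like the sketch below. The POSTGRES_* names are the official postgres image's variables; DATABASE_URL is a hypothetical name your application would read, and the postgresql+psycopg prefix assumes SQLAlchemy 2.x with psycopg, so adjust to your stack. Note that the host in the URL is db, matching the Compose service name:

POSTGRES_USER=app
POSTGRES_PASSWORD=change-me
POSTGRES_DB=app
DATABASE_URL=postgresql+psycopg://app:change-me@db:5432/app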

Pro tip: Just use os.environ to load them in your Python file.
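A minimal sketch of what that looks like, assuming the hypothetical DATABASE_URL from the .env above:

import os

# Fail loudly at startup if the variable is missing
DATABASE_URL = os.environ["DATABASE_URL"]

# Optional variables can fall back to a default
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"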

Pro tip #2: You can view those variables with docker compose config.

The key part to explain, and largely the reason I wrote this, is the api.volumes section. We create bind mounts separately for migrations and for versions under migrations. The important bit is in fact the "z" suffix. Without it, we keep getting permission errors. What does it do? In our context, it tells Docker to relabel the SELinux security context of the host path so that container processes can access it. Thus, I am able to use the bind mount for my migrations, see my versions on my host each time I run a migration command in the container, and still use the watch feature of Compose. Finally, we eat our cake and still have it!

PS: For users on macOS or Windows (which are common Docker development environments), this flag is typically unnecessary and will simply be ignored, since those platforms do not use SELinux. You can test that out!
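If you want to confirm that SELinux is what's biting you on a Linux host (Fedora and friends), you can poke at it with standard tools; a rough sketch:

getenforce            # "Enforcing" means SELinux is actively mediating access
ls -Z ./migrations    # shows the SELinux label on the host directory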

.dockerignore

__pycache__/
.venv
.ruff_cache
./src/email_templates/
.env

Finally…

I suggest creating a directory for your Alembic migrations, say "migrations", with a sub-directory "versions", then running alembic init. Do not forget to change the script_location variable in your alembic.ini, as well as to edit the appropriate part of your env.py.
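As a rough sketch of those two edits, assuming the directory name above and the hypothetical DATABASE_URL variable from earlier:

# alembic.ini (excerpt)
[alembic]
script_location = migrations

# env.py (excerpt): point Alembic at the database from the environment.
# DATABASE_URL is the assumed variable name from the .env sketch above.
import os

from alembic import context

config = context.config
config.set_main_option("sqlalchemy.url", os.environ["DATABASE_URL"])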

docker compose up --watch

In a second terminal,

docker compose exec api uv run alembic revision --autogenerate -m "Initial"
docker compose exec api uv run alembic upgrade head

Conclusion

If you completed this, I hope you have learnt a thing or two. A lot of this is my personal opinion and my own solution, so you may find yourself disagreeing with some of it; that's okay. Feel free to send me a message on LinkedIn. Thanks!!


