FastAPI 0.95.0 upgrade – problem with pydantic unions

FastAPI 0.95.0 came with a great enhancement: support for declaring dependencies and parameters using Annotated. But I’ll write more about that in a separate blog post. While updating my project dependencies and changing the FastAPI version from 0.94.1 to 0.95.0, I encountered a problem connected with a regression (?) that was introduced along with the above enhancement and with adding support for PEP 593 in general. There is a problem with pydantic unions used in body params.

Problem context

In the code there is a place where I use pydantic union to indicate body parameter type. A quick example of how it works:

from typing import Literal, Union
from typing_extensions import Annotated

from fastapi import FastAPI
from pydantic import BaseModel, Field

class FirstType(BaseModel):
    type_name: Literal["first"] = "first"

class SecondType(BaseModel):
    type_name: Literal["second"] = "second"

ItemType = Annotated[
    Union[FirstType, SecondType],
    Field(discriminator="type_name"),
]

app = FastAPI()

@app.post('/')
async def create(item: ItemType) -> None:
    ...

In the above example, Field with discriminator set to type_name automatically indicates whether item is of type FirstType or SecondType, based on the required and validated type_name value (only “first” or “second” values are accepted). It was done based on the pydantic docs: Discriminated Unions.
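The core idea of a discriminated union can be sketched in plain Python. This is a toy illustration only, not pydantic’s actual implementation – the dataclasses below are dependency-free stand-ins for the models above:

```python
from dataclasses import dataclass

# Stand-ins for the pydantic models above; plain dataclasses keep
# this sketch dependency-free (this is NOT pydantic's real code).
@dataclass
class FirstType:
    type_name: str = "first"

@dataclass
class SecondType:
    type_name: str = "second"

# The discriminator maps the type_name value straight to one model,
# instead of trying every union member until one validates.
DISCRIMINATOR_MAP = {"first": FirstType, "second": SecondType}

def parse_item(payload: dict):
    try:
        model = DISCRIMINATOR_MAP[payload["type_name"]]
    except KeyError:
        raise ValueError(f"unexpected type_name: {payload.get('type_name')!r}")
    return model(**payload)

print(type(parse_item({"type_name": "second"})).__name__)  # SecondType
```

This is also why the discriminator field must be a Literal with a distinct value per model: the value alone has to pick exactly one branch of the union.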

And it was working great… until the upgrade. After it, a bunch of unit tests started failing with an assert:

AssertionError: Param: item can only be a request body, using Body()

Working solution

After a few moments of research I found out that the problem had already been reported. The suggested solution, which I tried and tested, is to replace pydantic.Field with fastapi.Body.

So the above example changes as follows:

from typing import Literal, Union
from typing_extensions import Annotated

- from fastapi import FastAPI
- from pydantic import BaseModel, Field
+ from fastapi import Body, FastAPI
+ from pydantic import BaseModel

class FirstType(BaseModel):
    type_name: Literal["first"] = "first"

class SecondType(BaseModel):
    type_name: Literal["second"] = "second"

ItemType = Annotated[
    Union[FirstType, SecondType],
-   Field(discriminator="type_name"),
+   Body(discriminator="type_name"),
]

app = FastAPI()

@app.post('/')
async def create(item: ItemType) -> None:
    ...

That’s all. As @phillipuniverse mentioned under the proposed solution, it probably shouldn’t have worked before at all, and using the Body class came with an enhancement to the automatically generated OpenAPI schema: “Before, the union had generated an anyOf, but now it generates the correct discriminated union with oneOf“.

Database isolation levels

Last time I wrote about ACID properties. Isolation is one of ACID transaction properties, along with atomicity, consistency and durability.

There are four isolation levels defined by the ANSI/ISO SQL standard:

  • serializable,
  • repeatable reads,
  • read committed,
  • read uncommitted.

Read phenomena

The standard also describes three read phenomena whose behaviour and result vary depending on the chosen isolation level:

  • a dirty read – a transaction can read not-yet-committed data modified by another transaction,
  • a non-repeatable read – during a transaction a row is read twice and the values within the row differ between the reads,
  • a phantom read – another transaction makes changes by adding or removing rows. Depending on the isolation level, the first transaction could return the same set of rows as before, even if some of them were removed or added by the second transaction.
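The non-repeatable read phenomenon above can be illustrated with a toy in-memory “database” – a deliberately simplified Python sketch with no locking at all, not a real engine:

```python
# A toy "database": one shared row and no locking at all,
# so a reader sees whatever value is current at read time.
db = {"balance": 100}

def transaction_b():
    db["balance"] = 150            # concurrent write, "committed" immediately

def transaction_a(log):
    log.append(db["balance"])      # first read: 100
    transaction_b()                # another transaction commits in between
    log.append(db["balance"])      # second read: 150 -> non-repeatable read

reads = []
transaction_a(reads)
print(reads)  # [100, 150] -- the same row read twice, with different values
```

Under repeatable read or serializable, a real database would give transaction A the value 100 for both reads, by locking the row or by reading from a snapshot.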

Isolation levels and read phenomena

So now that I’ve mentioned isolation levels and read phenomena, how do they meet each other?

  • serializable:
    • keeps read and write locks to the end of the transaction,
    • range-locks must be acquired when a SELECT query uses a range WHERE clause (to avoid phantom reads).
  • repeatable reads:
    • keeps read and write locks to the end of the transaction,
    • range-locks are not managed (phantom reads are possible),
    • write skew (a phenomenon where two concurrent transactions read overlapping data and then make disjoint writes based on those stale reads) is possible at this isolation level in some systems.
  • read committed:
    • keeps only write locks to the end of the transaction (non-repeatable reads phenomenon can occur in this isolation level),
    • range-locks are not managed (phantom reads are possible).
  • read uncommitted:
    • one transaction may see not-yet-committed changes made by other transactions (dirty reads are possible).

Isolation levels in databases

The isolation level is typically defined at the database level. As I work mainly with PostgreSQL, let me focus on that database. PostgreSQL supports all four levels, but internally only three of them are implemented (true for version 12, which I use, and also for version 14, the latest available version at the time I published this post): read uncommitted mode behaves like read committed. PostgreSQL also guarantees that at the repeatable read level, phantom reads are not possible!

In PostgreSQL the default isolation level is read committed, but you can set a different isolation level using the SET TRANSACTION command.
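For example, inside an open transaction (a minimal SQL sketch; the queries between the two statements are up to you):

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- statements in this transaction now run at repeatable read
COMMIT;
```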

Databases: ACID transactions

I was asked about the ACID term recently and I was a little confused, because I don’t use this term on a daily basis. As a backend developer I use database transactions a lot, but the ACID term was pretty new to me.

ACID is an acronym formed from the initial letters of 4 words:

  • atomicity,
  • consistency,
  • isolation,
  • durability.

And you know what? A database transaction has all of these properties.

Atomicity guarantees that all statements in a transaction are treated as a single unit. Either all or none of them are committed, and in case of a problem with an individual statement, the database state isn’t changed.

Consistency means that data produced within a transaction is validated against defined rules, constraints, etc.

Isolation allows many transactions to run at the same time, but without knowing the state of, or operations performed within, parallel transactions. How they operate on shared data is defined by the database isolation level (to be described in the next post).

Durability ensures that data committed by a transaction will be persisted in the database.
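Atomicity is easy to observe with Python’s built-in sqlite3 module. A minimal sketch (the table and values here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()

try:
    with conn:  # one transaction: both statements commit, or neither does
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
        # duplicate primary key -> IntegrityError -> whole transaction rolls back
        conn.execute("INSERT INTO accounts VALUES (1, 30)")
except sqlite3.IntegrityError:
    pass

# the UPDATE was rolled back together with the failed INSERT
print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])  # 100
```

The failed INSERT takes the successful UPDATE down with it, leaving the database state unchanged.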

The above set of properties, gathered together in 1983 by Andreas Reuter and Theo Härder, guarantees data validity despite any possible errors (including power failures 😉).

M1 Pro device: problems with docker compose

Over the last few days I was trying to run dockerized projects on a MacBook Pro with Apple silicon – an M1 Pro processor. I was using docker compose and encountered a few problems in the process.

In the problematic project I have two Dockerfiles. One is based on python:3.8.5-slim image and the second one on node:12 image.

I encountered problems in both the backend (Python-based) and frontend (Node-based) images.

I took a few steps to make things work (I’m not mentioning everything, and please don’t follow these steps now – TL;DR below):

  1. updated pip while building the image, so I added a RUN pip install --upgrade pip line before installing Python dependencies,
  2. updated the version numbers of a few project dependencies. Just to mention: psycopg2-binary==2.8.4 was problematic, while psycopg2-binary==2.9.3 started working properly,
  3. added g++ to my RUN apt-get install -y line,
  4. also had a problem with the frontend image, but fixing one problem caused another with gulp-imagemin. I reported it here.

Today I found the solution here that allows me to run and build my app without any problems and without the changes mentioned above*. The only thing I had to change was adding --platform=linux/amd64, so changing the lines:

FROM python:3.8.5-slim
FROM node:12

to:

FROM --platform=linux/amd64 python:3.8.5-slim
FROM --platform=linux/amd64 node:12
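If you prefer to keep the Dockerfiles untouched, I believe the same can be expressed per service in docker-compose.yml via the platform key (a sketch – the service names and build paths below are made up):

```yaml
services:
  backend:
    platform: linux/amd64
    build: ./backend
  frontend:
    platform: linux/amd64
    build: ./frontend
```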

I hope it can save you a day too. Let me know if it works on your machine!

* OK, there’s still a problem with one of my minor dependencies when running the app. I reported it here, but besides this the solution works.

Homebrew: uninstall unneeded dependencies

While dealing with setting up my new MacBook Pro with Apple Silicon M1 Pro chip I chose not to use the image from my older MacBook Pro 2015. I decided to install everything I need from scratch.

I encountered a problem with uninstalling brew packages together with all their dependencies that are not needed by other packages. Maybe it is not a real problem, but it’s a nice practice to know.

First I thought I would need gcc, so I typed:

➜  ~ brew install gcc

Installing the gcc package resulted in installing a few dependencies under the hood:

➜  ~ brew deps --tree --installed gcc
├── gmp
├── isl
│   └── gmp
├── libmpc
│   ├── gmp
│   └── mpfr
│       └── gmp
├── mpfr
│   └── gmp
└── zstd

After a while I decided I don’t need gcc, so I uninstalled it with:

➜  ~ brew uninstall gcc

But some dependencies were not removed along with gcc. The best suggestion I found (here) to list and remove them is to use the brew autoremove command.

With the -n option you can list the unneeded dependencies that would be uninstalled:

➜  ~ brew autoremove -n
==> Would uninstall 4 unneeded formulae:

and execute brew autoremove to uninstall them:

➜  ~ brew autoremove
==> Uninstalling 4 unneeded formulae:
Uninstalling /opt/homebrew/Cellar/zstd/1.5.1... (31 files, 2.4MB)
Uninstalling /opt/homebrew/Cellar/isl/0.24... (73 files, 7MB)
Uninstalling /opt/homebrew/Cellar/libmpc/1.2.1... (12 files, 415.7KB)
Uninstalling /opt/homebrew/Cellar/mpfr/4.1.0... (30 files, 5.2MB)

Thanks for reading. Do you know any better solution? Don’t hesitate to leave a comment!