Adaptation between Python and PostgreSQL types

Many standard Python types are adapted into SQL and returned as Python objects when a query is executed.

The following table shows the default mapping between Python and PostgreSQL types:

TODO: complete table

Python               PostgreSQL                 See also
-------------------  -------------------------  -------------------
bool                 bool                       Booleans adaptation
float                real, double               Numbers adaptation
int                  smallint, integer, bigint  Numbers adaptation
Decimal              numeric                    Numbers adaptation
str                  varchar, text              Strings adaptation
bytes                bytea                      Binary adaptation
date                 date                       TODO adaptation
time                 time, timetz               TODO adaptation
datetime             timestamp, timestamptz     TODO adaptation
timedelta            interval                   TODO adaptation
list                 ARRAY                      TODO adaptation
tuple, namedtuple    composite types            TODO adaptation
dict                 hstore                     TODO adaptation
Psycopg’s Range      range                      TODO adaptation
Anything™            json                       JSON adaptation
UUID                 uuid                       TODO adaptation
ipaddress objects    inet, cidr                 TODO adaptation

Booleans adaptation

Python bool values True and False are converted to the equivalent PostgreSQL boolean type:

>>> cur.execute("SELECT %s, %s", (True, False))
# equivalent to "SELECT true, false"
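On the way back, PostgreSQL booleans are returned as Python bool (a quick check in the same session):

>>> cur.execute("SELECT true").fetchone()[0]
True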

Numbers adaptation

Python int values are converted to PostgreSQL bigint (a.k.a. int8). Note that this could create some problems:

  • Python int is unbounded. If you are inserting numbers larger than 2^63 (so your target column must be numeric, or you’ll get an overflow on arrival…) you should convert them to Decimal.

  • Certain PostgreSQL functions and operators, such as date + int, expect an integer (a.k.a. int4): passing them a bigint may cause an error:

    cur.execute("select current_date + %s", [1])
    # UndefinedFunction: operator does not exist: date + bigint
    

    In this case you should add a ::int cast to your query, or use the Int4 wrapper:

    cur.execute("select current_date + %s::int", [1])
    
    cur.execute("select current_date + %s", [Int4(1)])
    

    TODO

    document Int* wrappers

Python float values are converted to PostgreSQL float8.

Python Decimal values are converted to PostgreSQL numeric.

On the way back, smaller types (int2, int4, float4) are promoted to the larger Python counterpart: int2 and int4 values are returned as Python int, float4 values as Python float.
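
For example, int2, int4 and float4 values all come back as plain Python int and float (a quick check with the cursor used above):

cur.execute("select 1::int2, 1::int4, 1.5::float4")
cur.fetchone()
# (1, 1, 1.5)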

Note

Sometimes you may prefer to receive numeric data as float instead, for performance reasons or ease of manipulation: you can configure an adapter to cast PostgreSQL numeric to Python float. This of course may imply a loss of precision.
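
As a lighter-weight alternative to configuring an adapter, a cast in the query itself achieves the same result for a single statement (a minimal sketch, using a hypothetical measures table with a numeric value column):

cur.execute("select value::float8 from measures")
cur.fetchone()[0]
# returned as a Python float instead of a Decimal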

Strings adaptation

Python str is converted to PostgreSQL string syntax, and PostgreSQL types such as text and varchar are converted back to Python str:

import psycopg3

conn = psycopg3.connect()
conn.execute(
    "insert into strtest (id, data) values (%s, %s)",
    (1, "Crème Brûlée at 4.99€"))
conn.execute("select data from strtest where id = 1").fetchone()[0]
'Crème Brûlée at 4.99€'

PostgreSQL databases have an encoding, and the session has an encoding too, exposed in the Connection.client_encoding attribute. If your database and connection are in UTF-8 encoding you will likely have no problem; otherwise you will have to make sure that your application only deals with the non-ASCII characters that the database can handle. Failing to do so may result in encoding/decoding errors:

# The encoding is set at connection time according to the db configuration
conn.client_encoding
'utf-8'

# The Latin-9 encoding can manage some European accented letters
# and the Euro symbol
conn.client_encoding = 'latin9'
conn.execute("select data from strtest where id = 1").fetchone()[0]
'Crème Brûlée at 4.99€'

# The Latin-1 encoding doesn't have a representation for the Euro symbol
conn.client_encoding = 'latin1'
conn.execute("select data from strtest where id = 1").fetchone()[0]
# Traceback (most recent call last)
# ...
# UntranslatableCharacter: character with byte sequence 0xe2 0x82 0xac
# in encoding "UTF8" has no equivalent in encoding "LATIN1"

In rare cases you may have strings with unexpected encodings in the database. Using the SQL_ASCII client encoding (or setting client_encoding = "ascii") will disable decoding of the data coming from the database, which will be returned as bytes:

conn.client_encoding = "ascii"
conn.execute("select data from strtest where id = 1").fetchone()[0]
b'Cr\xc3\xa8me Br\xc3\xbbl\xc3\xa9e at 4.99\xe2\x82\xac'

Alternatively you can cast the unknown-encoding data to bytea to retrieve it as bytes, leaving other strings unaltered: see Binary adaptation.

Note that PostgreSQL text cannot contain the 0x00 byte. If you need to store Python strings that may contain binary zeros you should use a bytea field.
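
For example, a string containing a binary zero can be encoded and stored as bytea (a sketch, assuming a hypothetical blobs table with a bytea column):

conn.execute(
    "insert into blobs (data) values (%s)",
    ["zero\x00byte".encode("utf-8")])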

Binary adaptation

Python types representing binary objects (bytes, bytearray, memoryview) are converted by default to bytea fields; on the way back, data received from the database is returned as bytes.

TODO

Make sure bytearray/memoryview work and are composable with arrays/composites

If you are storing large binary data in bytea fields (such as binary documents or images) you should probably use the binary format to pass and return values; otherwise binary data will undergo ASCII escaping, taking some CPU time and more bandwidth. See Binary parameters and results for details.
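
For example, a parameter can be passed in binary format using the %b placeholder in place of %s (a sketch, assuming a hypothetical images table with a bytea column):

with open("picture.jpg", "rb") as f:
    image = f.read()

# %b passes the parameter in binary format instead of text format
conn.execute("insert into images (data) values (%b)", [image])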

JSON adaptation

psycopg3 can map between Python objects and the PostgreSQL json and jsonb types, allowing you to customise the load and dump functions used.

Because several Python objects could be considered JSON (dicts, lists, scalars, even date/time if the dumps function is customised to handle them), psycopg3 requires you to wrap the objects you want to dump as JSON in a wrapper: either psycopg3.types.Json or psycopg3.types.Jsonb.

from psycopg3.types import Jsonb

thing = {"foo": ["bar", 42]}
conn.execute("insert into mytable values (%s)", [Jsonb(thing)])

By default psycopg3 uses the standard library json.dumps() and json.loads() functions to serialize and de-serialize Python objects to JSON. If you want to customise globally how serialization happens, for instance changing serialization parameters or using a different JSON library, you can specify your own functions using the psycopg3.types.set_json_dumps() and set_json_loads() functions.

from decimal import Decimal
from functools import partial
import json

import ujson

from psycopg3.types import Jsonb, set_json_dumps, set_json_loads

# Use a faster dump function
set_json_dumps(ujson.dumps)

# Return floating point values as Decimal
set_json_loads(partial(json.loads, parse_float=Decimal))

conn.execute("select %s", [Jsonb({"value": 123.45})]).fetchone()[0]
# {'value': Decimal('123.45')}

If you need a more precise customisation, such as per-connection instead of global, you can subclass and register the JSON adapters in the right context: see JSON adapters.

TODO adaptation

TODO

Document the other types