Help your Users fix your Errors

Have you ever encountered an error while using a package and gone to Google to work out how to solve it, only to find no clear answers? Wouldn’t you have preferred to go directly to documentation that tells you exactly what went wrong and how to resolve it? A lot of us can tell similar stories, especially when trying something new. I have abandoned otherwise promising packages after encountering an error that wasn’t clear and that I didn’t know how to solve.


Reduce Duplication in Pytest Parametrised Tests using the Walrus Operator

Do you find yourself having to repeat literal values (like strings and integers) in parametrised tests? I often find myself in this situation and have been looking for ways of reducing this duplication. To show an example, consider this trivial function:

def get_second_word(text: str) -> str | None:
    """Get the second word from text."""
    words = text.split()
    return words[1] if len(words) >= 2 else None
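
One way the walrus operator can remove that repetition is to bind a literal the first time it appears in the parametrize list and re-use the name in later cases. A minimal sketch of the idea, assuming pytest (not necessarily the exact approach in the full post):

import pytest


@pytest.mark.parametrize(
    "text, expected",
    [
        # Bind the repeated literal once with the walrus operator...
        ((word := "hello"), None),  # a single word has no second word
        # ...and re-use it in the cases below.
        (f"{word} world", "world"),
        (f"{word} world again", "world"),
    ],
)
def test_get_second_word(text, expected):
    """Check get_second_word (defined above) for a range of inputs."""
    assert get_second_word(text) == expected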

Python TypedDict Arbitrary Key Names with Totality

PEP 589 introduced type hints for dictionaries to Python. These help with defining the structure of dictionaries where the keys and the types of the values are well known. TypedDict also supports keys that may not be present using the concept of totality, and, using inheritance, a dictionary where only some keys are required can be built.

The default syntax for defining a TypedDict is class based (example below). Whilst this is easy to understand, it limits the keys of the dictionary to valid Python identifiers. If a dictionary needs to include keys with, for example, a dash in the name, the class based syntax is no longer appropriate. There is an alternative syntax (similar to named tuples) that allows for arbitrary key names (example below). However, this syntax does not support inheritance, which means that a dictionary with mixed totality cannot be constructed using this syntax alone.
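
A quick sketch of the two syntaxes (assuming Python 3.8+; the names are illustrative):

from typing import TypedDict


# Class based syntax: supports inheritance and mixed totality, but every
# key must be a valid Python identifier.
class _EmployeeRequired(TypedDict):
    name: str


class Employee(_EmployeeRequired, total=False):
    division: str  # this key may be absent


# Alternative syntax: arbitrary key names (note the dashes), but no
# inheritance, so totality applies to all keys at once.
Headers = TypedDict("Headers", {"content-type": str, "x-request-id": str}, total=False)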


Inheritance for SQLAlchemy Models

One of the key principles of object oriented software engineering is inheritance. It can be used to increase code re-use, which reduces the volume of tests and speeds up development. You can use inheritance in SQLAlchemy as described in the SQLAlchemy documentation. However, that inheritance is mainly used to describe relationships between tables and not for the purpose of re-using pieces of models defined elsewhere.
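
For reference, SQLAlchemy’s built-in inheritance looks something like the following joined table inheritance sketch, where the subclass gets its own table joined back to the parent’s rather than re-using column definitions (illustrative names):

from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Person(Base):
    """Parent table; the type column records which subclass a row is."""
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    type = Column(String)
    __mapper_args__ = {"polymorphic_identity": "person", "polymorphic_on": type}


class Engineer(Person):
    """Subclass stored in its own table, joined to person by id."""
    __tablename__ = "engineer"
    id = Column(Integer, ForeignKey("person.id"), primary_key=True)
    engineer_name = Column(String)
    __mapper_args__ = {"polymorphic_identity": "engineer"}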

The openapi specification allows for inheritance using the allOf statement. This means that you could, for example, define a schema for id properties once and re-use that schema for any number of objects where you can customise things like the description that may differ object by object. You can also use allOf to combine objects, which is a powerful way of reducing duplication. You could, for example, define a base object with an id and name property that you then use repeatedly for other objects so that you don’t have to keep giving objects an id and a name.
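
As an illustrative fragment (not necessarily the package’s exact format), an allOf column that re-uses an Id schema while overriding the description might look like this once the YAML specification has been loaded into a Python dictionary:

# Fragment of a loaded specification: the property re-uses the Id schema
# via allOf and overrides the description (illustrative only).
id_property = {
    "allOf": [
        {"$ref": "#/components/schemas/Id"},
        {"description": "Unique identifier for the division."},
    ]
}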

If this feature could be brought to SQLAlchemy models, you would have a much shorter models.py file that is easier to maintain and understand. The plan for the openapi-SQLAlchemy package is to do just that. The first step has been completed with the addition of support for allOf for column definitions. If you aren’t familiar with the package, the Reducing API Code Duplication article describes its aims.

To start using the column inheritance feature, read the documentation for the feature, which describes it in detail and gives an example specification that makes use of it.

openapi-SQLAlchemy Now Supports $ref for Columns

A new version of openapi-SQLAlchemy has been released which adds support for $ref for columns. The package now supports openapi schemas such as the following:

components:
  schemas:
    Id:
      type: integer
      description: Unique identifier for the employee.
      example: 0
    Employee:
      description: Person that works for a company.
      type: object
      properties:
        id:
          $ref: "#/components/schemas/Id"
        name:
          type: string
          description: The name of the employee.
          example: David Andersson.

If you are interested in how this was accomplished with decorators and recursive programming, the following article describes the implementation: Python Recursive Decorators.

Python Recursive Decorators

In Python, decorators are a useful tool for changing the behaviour of functions without modifying the original function. For a recent release of openapi-SQLAlchemy, which added support for references among table columns, I used decorators to de-reference columns. I needed to support cases where a reference to a column was itself just a reference to another column. The solution was essentially to keep applying the decorator until the actual column schema was found.

What is a Column Reference?

If you are not familiar with openapi specifications, a simple schema for an object might be the following:

components:
  schemas:
    Employee:
      description: Person that works for a company.
      type: object
      properties:
        id:
          type: integer
          description: Unique identifier for the employee.
          example: 0
        name:
          type: string
          description: The name of the employee.
          example: David Andersson.

To be able to re-use the definition of the id property for another schema, you can do the following:

components:
  schemas:
    Id:
      type: integer
      description: Unique identifier for the employee.
      example: 0
    Employee:
      description: Person that works for a company.
      type: object
      properties:
        id:
          $ref: "#/components/schemas/Id"
        name:
          type: string
          description: The name of the employee.
          example: David Andersson.

In this case, the id property just references the Id schema. This could be done for other property definitions where the same schema applies to reduce duplicate schema definitions.

The Simple Case

The openapi-SQLAlchemy package allows you to map openapi schemas to SQLAlchemy models where an object becomes a table and a property becomes a column. The architecture of the package is broken into a factory for tables which then calls a column factory for each property of an object. The problem I had to solve was that, when a reference is encountered, the column factory gets called with the following dictionary as the schema for the column (for the id column in this case):

{"$ref": "#/components/schemas/Id"}

instead of the schema for the column. Apart from doing some basic checks, what needs to happen is that the schema of Id needs to be found and the column factory called with that schema instead of the reference. A perfect case for a decorator! The following is the code (minus some input checks):

import re

_REF_PATTERN = re.compile(r"^#\/components\/schemas\/(\w+)$")


def resolve_ref(func):
    """Resolve $ref schemas."""

    def inner(schema, schemas, **kwargs):
        """Replace function."""
        # Checking for $ref
        ref = schema.get("$ref")
        if ref is None:
            return func(schema=schema, **kwargs)

        # Retrieving new schema
        match = _REF_PATTERN.match(ref)
        schema_name = match.group(1)
        ref_schema = schemas.get(schema_name)

        return func(schema=ref_schema, **kwargs)

    return inner

The first step is to check whether the schema is a reference and, if it is not, to call the column factory straight away. If the schema is a reference, the referenced schema is retrieved and the factory is called with that schema instead of the reference.

The Recursive Case

The problem with the simple decorator is that the following openapi specification is valid:

components:
  schemas:
    Id:
      $ref: "#/components/schemas/RefId"
    RefId:
      type: integer
      description: Unique identifier for the employee.
      example: 0
    Employee:
      description: Person that works for a company.
      type: object
      x-tablename: employee
      properties:
        id:
          $ref: "#/components/schemas/Id"
        name:
          type: string
          description: The name of the employee.
          example: David Andersson.

Note that the Id schema just references the RefId schema. When this schema is used, the column factory would now be called with:

{"$ref": "#/components/schemas/RefId"}

That looks a lot like the original problem! The solution is that the decorator needs to be applied again, which also makes this look like a case for recursive programming. In recursive programming you have a base case and a recursive case. The base case is exactly the existing check for whether the schema is a reference. The recursive case needs to take a step towards the base case and apply the function again; here that means resolving one reference and then applying the decorator again. In other words, the final return in inner needs to somehow apply the decorator again rather than calling the factory directly. The solution is actually quite simple; the code needs to be changed to the following:

import re

_REF_PATTERN = re.compile(r"^#\/components\/schemas\/(\w+)$")


def resolve_ref(func):
    """Resolve $ref schemas."""

    def inner(schema, schemas, **kwargs):
        """Replace function."""
        # Checking for $ref
        ref = schema.get("$ref")
        if ref is None:
            return func(schema=schema, **kwargs)

        # Retrieving new schema
        match = _REF_PATTERN.match(ref)
        schema_name = match.group(1)
        ref_schema = schemas.get(schema_name)

        # Recursive case: resolve one reference and apply the decorator again
        return inner(schema=ref_schema, schemas=schemas, **kwargs)

    return inner

The change is that inner is called instead of the original function func. We then also need to pass in all the required arguments for inner, which, in this case, means passing through schemas, which func does not need.
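
Applied to a simplified stand-in for the column factory, the decorator now resolves chains of references before the factory ever sees them. A small sketch, assuming resolve_ref from above is in scope (the real column factory does much more):

@resolve_ref
def column_factory(schema, **kwargs):
    """Simplified stand-in for the real column factory."""
    return schema["type"]


schemas = {
    "Id": {"$ref": "#/components/schemas/RefId"},
    "RefId": {"type": "integer"},
}
# Both references get resolved before the factory is called.
result = column_factory(schema={"$ref": "#/components/schemas/Id"}, schemas=schemas)
assert result == "integer"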

Conclusion

Recursive decorators combine the ideas behind decorators and recursive programming. If the problem a decorator solves can be broken into steps where, after each step, the decorator might need to be applied again, you might have a case for a recursive decorator. The decorator must then implement a base case where the decorator is not applied again and a recursive case where one step is taken towards the base case and the decorator is applied again.

Reducing API Code Duplication

One of the basic principles of good software engineering is the DRY principle – Don’t Repeat Yourself. It means that information should only exist in one place and should not be repeated elsewhere. This leads to code that is easier to maintain since any change only has to be made once, among a range of other benefits.

Practicing the principle is harder than stating it. For example, in the case of an API that is supported by a database, chances are there are overlaps between the database and API schema.

In my experience, there is usually significant overlap between the schema defined in the database and the schema returned by the linked API. Changes to the database schema might accidentally not be propagated properly to the API schema, or the API interface documentation might not get updated.

A popular tool for exposing a database schema to a Python application is the SQLAlchemy library. This is usually achieved by defining a models.py file with classes that map to tables in the database. For one of the API endpoints you might retrieve some of these objects from the database, apply some business logic to them and then return them through the API interface.

To communicate to your users how to interact with your API, you might write an openapi specification. You could even go further and use tools like connexion to map endpoints to Python functions for fulfilment. One part of that openapi specification is to define the returned schema of each endpoint.

To get closer to fulfilling the DRY principle you might wish that there was some way to connect the SQLAlchemy models and openapi schema so that you only have to define the schema in one place. To fulfil that wish I started an open source package called openapi-SQLAlchemy.

The aim of the package is to accept an openapi specification and simplify creating the SQLAlchemy models file. For the MVP, the aim is the following: given an openapi specification, when it is read into a Python dictionary and passed to the module, a model factory is returned. That model factory can be called with the name of a schema and returns a class that is a valid SQLAlchemy model. For example:

# example-spec.yml
openapi: "3.0.0"

info:
  title: Test Schema
  description: API to illustrate openapi-SQLAlchemy MVP.
  version: "0.1"

paths:
  /employee:
    get:
      summary: Used to retrieve all employees.
      responses:
        200:
          description: Return all employees from the database.
          content:
            application/json:
              schema:
                type: array
                items:
                  "$ref": "#/components/schemas/Employee"

components:
  schemas:
    Employee:
      description: Person that works for a company.
      type: object
      properties:
        id:
          type: integer
          description: Unique identifier for the employee.
          example: 0
        name:
          type: string
          description: The name of the employee.
          example: David Andersson.
        division:
          type: string
          description: The part of the company the employee works in.
          example: Engineering
        salary:
          type: number
          description: The amount of money the employee is paid.
          example: 1000000.00
      required:
        - id
        - name
        - division

Normally the following models.py file would be required.

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, Float


Base = declarative_base()


class Employee(Base):
    """
    Person that works for a company.

    Attrs:
        id: Unique identifier for the employee.
        name: The name of the employee.
        division: The part of the company the employee works in.
        salary: The amount of money the employee is paid.

    """
    __tablename__ = "employee"
    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String, index=True, nullable=False)
    division = Column(String, index=True, nullable=False)
    salary = Column(Float, nullable=False)

As you can see there is a lot of duplicate information. The aim is to instead pass the specification to openapi-SQLAlchemy and reduce the models.py file to the following:

import yaml
from sqlalchemy.ext.declarative import declarative_base
from openapi_sqlalchemy import ModelFactory


Base = declarative_base()
with open("example-spec.yml") as spec_file:
    SPEC = yaml.safe_load(spec_file)
model_factory = ModelFactory(base=Base, spec=SPEC)


Employee = model_factory(name="Employee")

There is significantly less duplicate information across the specification and models file. The name of the object (Employee) is still repeated a few times, although each repetition can be viewed as a reference to the object, which is acceptable.

Whilst things like whether a column is nullable can be derived from the required property of the object, there are some additional pieces of information that have to be included in the specification file. For example, not every schema is a table. Also, the primary key, auto increment and index column modifiers are not currently recorded in the specification. The final Employee schema might look like the following.

    Employee:
      description: Person that works for a company.
      type: object
      x-tablename: employee
      properties:
        id:
          type: integer
          description: Unique identifier for the employee.
          example: 0
          x-primary-key: true
          x-autoincrement: true
        name:
          type: string
          description: The name of the employee.
          example: David Andersson.
          x-index: true
        division:
          type: string
          description: The part of the company the employee works in.
          example: Engineering
          x-index: true
        salary:
          type: number
          description: The amount of money the employee is paid.
          example: 1000000.00
      required:
        - id
        - name
        - division

There are more column modifiers that need to be supported, such as the unique constraint, and more column types, such as foreign keys. Inheritance and references will also need to be supported. All of this has to be done whilst ensuring that the specification remains valid.

None of this seems especially difficult to achieve in Python, and there are opportunities to use some Python tricks to reduce the amount of code that needs to be written. As I develop this package I expect to write updates on progress as well as articles on some of those Python tricks.

Testing Python Connexion Optional Query Parameter Names

connexion is a great tool for building APIs in Python. It uses the openapi specification to define input validation that is automatically enforced, maps API endpoints to named Python functions and handles JSON serialising and de-serialising for you.

One of the things to keep in mind when using connexion is that, if you have defined query parameters, only the query parameters that you have defined can ever get passed to the function handling the endpoint. For example, consider the following API specification for retrieving employees from a database:
specification.yml

# specification.yml
openapi: "3.0.0"

info:
  title: Sample API
  description: API demonstrating how to check for optional parameters.
  version: "0.1"

paths:
  /employee:
    get:
      summary: Gets Employees that match query parameters.
      operationId: library.employee.search
      parameters:
        - in: query
          name: join_date
          schema:
            type: string
            format: date
          required: false
          description: Filters employees for a particular date that they joined the company.
        - in: query
          name: business_unit
          schema:
            type: string
          required: false
          description: Filters employees for a particular business unit.
        - in: query
          name: city
          schema:
            type: string
          required: false
          description: Filters employees for which city they are working in.
      responses:
        200:
          description: Returns all the Employees in the database that match the query parameters.
          content:
            application/json:
              schema:
                type: object
                properties:
                  first_name:
                    description: The first name of the Employee.
                    type: string
                  last_name:
                    description: The last name of the Employee.
                    type: string

If you call /employee with some_parameter, you don’t receive an error: connexion will not call your function with some_parameter because it is not defined as a query parameter in the specification and hence gets ignored. To help prove that, consider the following project setup.

Project Setup

The above specification.yml is included in the root directory. The library folder is also in the root directory. Inside the library folder is the following __init__.py file:
library/__init__.py

# library.__init__.py

from . import employee

The library folder contains the employee folder which has the following __init__.py and controller.py files:
library/employee/__init__.py
library/employee/controller.py

# library.employee.__init__.py

from .controller import search

# library.employee.controller.py


def search(join_date=None, business_unit=None, city=None):
    return [{'first_name': 'David', 'last_name': 'Andersson'}]

Test Setup

pytest is used to test the API. To help with the API tests, pytest-flask is used. To show that calling the /employee endpoint with parameters that aren’t in the specification does not result in an error, the following test configuration is placed in the root directory:
conftest.py

# conftest.py
from unittest import mock
import pytest
import connexion
import library


@pytest.fixture(scope='session')
def setup_mocked_search():
    """Sets up spy fr search function."""
    with mock.patch.object(library.employee, 'search', wraps=library.employee.search) as mock_search:
        yield mock_search


@pytest.fixture(scope='function')
def mocked_search(setup_mocked_search):
    """Resets search spy."""
    setup_mocked_search.reset_mock()
    return setup_mocked_search


@pytest.fixture(scope='session')
def app(setup_mocked_search):
    """Flask app for testing."""
    # Adding swagger file
    test_app = connexion.FlaskApp(__name__, specification_dir='.')
    test_app.add_api('specification.yml')

    return test_app.app

For now, focus on the app fixture, which uses the standard connexion API setup and then returns the flask application. Through the magic of pytest-flask, we can then define the following test to demonstrate that the API can be called with parameters not named in the openapi specification:

def test_some_parameter(client):
    """
    GIVEN a value for some parameter
    WHEN GET /employee is called with some_parameter set to that value
    THEN no exception is raised.
    """
    client.get('/employee?some_parameter=value 1')

Verifying Endpoint Parameter Names

This means that a mismatch between the query parameter names in the openapi specification and the arguments of the function that handles the endpoint can go undetected. To check that this is not the case, you could come up with a number of tests that only pass if a query parameter was correctly passed to the search function. Without mocking the search function this can be tedious, as you would have to create tests with just the right setup so that the effect of a particular query parameter is noticeable.

There is an easier way with mocks. Unfortunately, it is not as easy as monkey patching the search function in the body of a test function. The problem is that, once the app fixture has run, the reference to the search function has been hard coded into the flask application and is no longer affected by mocking the search function. It is possible to retrieve the reference to the search function from the flask application instance, but you would have to dig fairly deep and may have to use some private member variables, which is not advisable.

Instead, let’s take another look at the conftest.py file above. The trick is to apply a spy to the search function before the app fixture is invoked. A spy rather than a mock is used because the spy will be in place for all tests and we may want to write a test where the real search function is called! With a spy the underlying function still gets called, which is not the case if the search function is replaced by a mock. The setup_mocked_search fixture adds the spy to the search function. To ensure that it gets called before the app fixture, the app fixture takes it as an argument, although it doesn’t otherwise use it. Because the app fixture is a session level fixture, we also want the fixture that sets up the spy to be a session level fixture, otherwise all the tests would be slowed down. However, this means that the spy state would leak between tests. That is solved by the mocked_search fixture, which resets the spy state before each test.

Now we can use the test client to define tests that check that each of the query parameters is passed to the search function. The tests that perform these checks may look like the following:
test.py

# test.py
"""Endpoint tests for /employee"""


def test_some_parameter(client, mocked_search):
    """
    GIVEN library.employee.search has a spy and a value for some parameter
    WHEN GET /employee is called with some_parameter set to that value
    THEN library.employee.search is called with no parameters.
    """
    client.get('/employee?some_parameter=value 1')

    # Checking search call
    mocked_search.assert_called_once_with()


def test_join_date(client, mocked_search):
    """
    GIVEN library.employee.search has a spy and a date
    WHEN GET /employee is called with join_date set to the date
    THEN library.employee.search is called with the join date.
    """
    client.get('/employee?join_date=2000-01-01')

    # Checking search call
    mocked_search.assert_called_once_with(join_date='2000-01-01')


def test_business_unit(client, mocked_search):
    """
    GIVEN library.employee.search has a spy and a business unit
    WHEN GET /employee is called with business_unit set to the business unit
    THEN library.employee.search is called with the business unit.
    """
    client.get('/employee?business_unit=business unit 1')

    # Checking search call
    mocked_search.assert_called_once_with(business_unit='business unit 1')


def test_city(client, mocked_search):
    """
    GIVEN library.employee.search has a spy and a city
    WHEN GET /employee is called with city set to the city
    THEN library.employee.search is called with the city.
    """
    client.get('/employee?city=city 1')

    # Checking search call
    mocked_search.assert_called_once_with(city='city 1')

You now have a starting point for testing that the function arguments and the query parameter names in the specification match. You may not want to create these tests for every query parameter on every endpoint; in particular, if a query parameter is required, a naming mismatch already surfaces as an exception because the underlying function gets called with a keyword argument that is not in its signature.

Parametrize Angular Jasmine Tests

Functions that map a value from one form to another are useful since you can separate the mapping logic from things like services and components, which are best kept as simple as possible. This allows for simple tests covering a range of conditions that would take a lot more boilerplate code if the logic were embedded in a component or service. It also allows you to re-use the mapping.

This does lead to simpler test cases, but because they are so simple the test boilerplate can become repetitive and overwhelm what is actually being tested. In practice, these types of tests are often copied and pasted, which can lead to more maintenance overhead or outdated commentary around tests. To illustrate, consider the following mapping function.

Transformation Function

The following function takes in a number and returns a number based on a series of conditional checks.
transform.ts

// transform.ts
export function transform(value: number): number {
  if (value === undefined) {
    return 0;
  }
  if (value === null) {
    return 0;
  }
  if (value < 0) {
    return -1;
  }
  if (value === 0) {
    return 0;
  }
  return 1;
}

Traditional Tests

To test the function you might come up with cases where the input is undefined, null, -10, -1, 0, 1 and 10. Those seem reasonable given the checks and boundaries in the function. This results in something like the following test file.
transform.spec.ts

// transform.spec.ts
import { transform } from './transform';

describe('transform', () => {
  describe('input value is undefined', () => {
    it('should return 0', () => {
      expect(transform(undefined)).toEqual(0);
    });
  });

  describe('input value is null', () => {
    it('should return 0', () => {
      expect(transform(null)).toEqual(0);
    });
  });

  describe('input value is -10', () => {
    it('should return -1', () => {
      expect(transform(-10)).toEqual(-1);
    });
  });

  describe('input value is -1', () => {
    it('should return -1', () => {
      expect(transform(-1)).toEqual(-1);
    });
  });

  describe('input value is 0', () => {
    it('should return 0', () => {
      expect(transform(0)).toEqual(0);
    });
  });

  describe('input value is 10', () => {
    it('should return 1', () => {
      expect(transform(10)).toEqual(1);
    });
  });

  describe('input value is 1', () => {
    it('should return 1', () => {
      expect(transform(1)).toEqual(1);
    });
  });
});

The key pieces of information in the tests are that undefined and null map to 0, negative numbers map to -1, 0 maps to 0 and positive numbers map to 1. Compare that simple sentence to the 43 lines of code required to implement the tests.

Parametrized Tests

The following shows how the 43 lines of test code can be reduced to 18.
transform.parametrized.spec.ts

// transform.parametrized.spec.ts
import { transform } from './transform';

describe('transform parametrized tests', () => {
  // Defining input value and expected transformation
  [
    { inputValue: undefined, expectedValue: 0 },
    { inputValue: null, expectedValue: 0 },
    { inputValue: -10, expectedValue: -1 },
    { inputValue: -1, expectedValue: -1 },
    { inputValue: 0, expectedValue: 0 },
    { inputValue: 1, expectedValue: 1 },
    { inputValue: 10, expectedValue: 1 }
  ].forEach(({inputValue, expectedValue}) => {
    describe(`input is ${inputValue}`, () => {
      it(`should return ${expectedValue}`, () => {
        expect(transform(inputValue)).toEqual(expectedValue);
      });
    });
  });
});

The array of parameter objects at the top clearly demonstrates the expected input to output mapping, and the forEach loop demonstrates how the mapping is exercised. The loop takes advantage of object destructuring to assign each object in the array to the parameters of the testing function. The describe and it functions are passed strings that include the input and expected value, which makes reading and understanding the output of failed tests easier.

There are drawbacks to parametrizing tests, and in a lot of cases it can make tests confusing. In this case, the parameters and the code that executes the test fit on the same page in a code editor, which makes them easy to understand. There are also few input parameters and the test logic is simple.

When the number of input parameters gets large, the test becomes harder to read because you have to jump back and forth between the parameter definitions and the test code. If the test code is long and complex, introducing parameters increases the complexity even further. And if parametrizing a test means significantly increasing its logic, the test will be hard to understand when you come back to it a few months down the track.

However, with few input parameters and a function for which little test logic is required, parametrised tests reduce boilerplate code and make the tests easier to understand. TypeScript also helps, as it will infer the types of the input and expected parameters for you.