Module Contents#



A base class for LCI backends.

class bw2data.backends.base.LCIBackend(name)[source]#

Bases: bw2data.data_store.ProcessedDataStore

Inheritance diagram of bw2data.backends.base.LCIBackend

A base class for LCI backends.

Subclasses must support at least the following calls:

  • load()

  • write(data)

In addition, they should specify their backend with the backend attribute (a unicode string).

LCIBackend provides the following, which should not need to be modified:

  • rename

  • copy

  • find_dependents

  • random

  • process

For new classes to be recognized by the DatabaseChooser, they need to be registered with the config object, e.g.:

config.backends['backend type string'] = BackendClass

Instantiation does not load any data. If this database is not yet registered in the metadata store, a warning is written to stdout.
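The registration step above can be sketched as follows. This is a hypothetical minimal backend, not a real one shipped with bw2data; the stand-in `LCIBackend` and `config` classes below exist only so the sketch runs standalone — in real use you would import them from `bw2data.backends.base` and `bw2data` respectively.

```python
# Stand-ins so the sketch runs without bw2data installed; in real code,
# import LCIBackend from bw2data.backends.base and config from bw2data.
class LCIBackend:
    def __init__(self, name):
        self.name = name

class _Config:
    backends = {}

config = _Config()

class InMemoryBackend(LCIBackend):
    """Hypothetical backend keeping data in a module-level dict."""
    backend = "inmemory"  # backend type string, as required
    _store = {}

    def load(self, as_dict=False):
        # A plain dict already satisfies the as_dict contract.
        return dict(self._store.get(self.name, {}))

    def write(self, data):
        self._store[self.name] = data

# Register the new class so the DatabaseChooser can find it.
config.backends["inmemory"] = InMemoryBackend
```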

The data schema for databases in voluptuous is:

exchange = {
    Required("input"): valid_tuple,
    Required("type"): basestring,
}
lci_dataset = {
    Optional("categories"): Any(list, tuple),
    Optional("location"): object,
    Optional("unit"): basestring,
    Optional("name"): basestring,
    Optional("type"): basestring,
    Optional("exchanges"): [exchange],
}
db_validator = Schema({valid_tuple: lci_dataset}, extra=True)
  • valid_tuple is a dataset identifier, like ("ecoinvent", "super strong steel")

  • uncertainty_fields are fields from an uncertainty dictionary.
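A rough stdlib-only check mirroring what the schema above enforces can be sketched like this. It is not the actual `db_validator` (which uses voluptuous), and `looks_like_lci_dataset` is a hypothetical name:

```python
def looks_like_lci_dataset(key, ds):
    """Sketch: check one (key, dataset) pair against the schema above."""
    # valid_tuple: a two-element dataset identifier like ("db name", "code")
    if not (isinstance(key, tuple) and len(key) == 2):
        return False
    # Each exchange requires a tuple "input" and a string "type"
    for exc in ds.get("exchanges", []):
        inp = exc.get("input")
        if not (isinstance(inp, tuple) and len(inp) == 2):
            return False
        if not isinstance(exc.get("type"), str):
            return False
    return True
```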

Processing a Database actually produces two parameter arrays: one for the exchanges, which make up the technosphere and biosphere matrices, and a geomapping array which links activities to locations.


*name* (unicode string) – Name of the database to manage.

property filename[source]#

Remove filesystem-unsafe characters and perform unicode normalization using utils.safe_filename().
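The idea can be sketched with the standard library. This is not the actual `utils.safe_filename()` implementation (whose normalization form and character set may differ); `safe_filename_sketch` is a hypothetical name:

```python
import re
import unicodedata

def safe_filename_sketch(name):
    """Sketch: normalize unicode, then replace filesystem-unsafe chars."""
    normalized = unicodedata.normalize("NFC", name)
    # Keep word characters, hyphens, and dots; replace everything else.
    return re.sub(r"[^\w\-.]", "_", normalized)
```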

dtype_fields = [(), (), (), (), ()][source]#
dtype_fields_geomapping = [(), (), (), ()][source]#

copy(name)[source]#

Make a copy of the database.

Internal links within the database will be updated to match the new database name, i.e. ("old name", "some id") will be converted to ("new name", "some id") for all exchanges.


name (*) – Name of the new database. Must not already exist.


Delete data from this instance. For the base class, only clears cached data.

abstract filepath_intermediate()[source]#
find_dependents(data=None, ignore=None)[source]#

Get sorted list of direct dependent databases (databases linked from exchanges).

  • data (*) – Inventory data

  • ignore (*) – List of database names to ignore


List of database names
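The core of this lookup can be sketched as a scan over exchange inputs. This is a simplified stand-in, not the real `find_dependents` implementation, and `find_dependents_sketch` is a hypothetical name:

```python
def find_dependents_sketch(data, ignore=None):
    """Sketch: collect database names referenced by exchange inputs."""
    ignore = set(ignore or [])
    return sorted({
        exc["input"][0]              # first element of the key tuple
        for ds in data.values()
        for exc in ds.get("exchanges", [])
        if exc["input"][0] not in ignore
    })
```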


Recursively get list of all dependent databases.


A set of database names

abstract load(*args, **kwargs)[source]#

Load the intermediate data for this database.

If load() does not return a dictionary, then the returned object must have at least the following dictionary-like methods:

  • __iter__

  • __contains__

  • __getitem__

  • __setitem__

  • __delitem__

  • __len__

  • keys()

  • values()

  • items()

However, this method must support the keyword argument as_dict, and .load(as_dict=True) must return a normal dictionary with all Database data. This is necessary for JSON serialization.

It is recommended to subclass collections.abc.MutableMapping (see SynchronousJSONDict for an example of data loaded on demand).
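A minimal load-on-demand container along these lines can be sketched as follows. This is a hypothetical illustration of the pattern, not the SynchronousJSONDict implementation:

```python
from collections.abc import MutableMapping

class LazyDict(MutableMapping):
    """Sketch: dict-like container that loads its data on first access."""

    def __init__(self, loader):
        self._loader = loader   # callable returning the real dict
        self._cache = None

    @property
    def _data(self):
        if self._cache is None:
            self._cache = self._loader()  # load exactly once
        return self._cache

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

    def __delitem__(self, key):
        del self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)
```

MutableMapping supplies `keys()`, `values()`, `items()`, and `__contains__` automatically, so only the five methods above need to be written; `dict(lazy)` then satisfies the `as_dict=True` contract.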

process(*args, **kwargs)[source]#

Process inventory documents.

Creates both a parameter array for exchanges, and a geomapping parameter array linking inventory activities to locations.

If the uncertainty type is no uncertainty, undefined, or not specified, then the ‘amount’ value is used for ‘loc’ as well. This is needed for the random number generator.
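That rule can be sketched per exchange. The numeric codes assume the common stats_arrays convention (0 = undefined, 1 = no uncertainty); `fill_loc` is a hypothetical helper name, not part of the bw2data API:

```python
def fill_loc(exchange):
    """Sketch: copy 'amount' into 'loc' when there is no real uncertainty."""
    if exchange.get("uncertainty type", 0) in (0, 1):
        exchange["loc"] = exchange["amount"]
    return exchange
```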


version (*) – The version of the database to process

Doesn’t return anything, but writes two files to disk.


Search through the database.


random()[source]#

Return a random activity key.

Returns a random activity key, or None (and issues a warning) if the current database is empty.
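The behaviour can be sketched with the standard library (`random_key` is a hypothetical name, not the method itself):

```python
import random
import warnings

def random_key(db_data):
    """Sketch: pick a random key, or warn and return None if empty."""
    if not db_data:
        warnings.warn("This database is empty")
        return None
    return random.choice(list(db_data))
```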


register(**kwargs)[source]#

Register a database with the metadata store.

Databases must be registered before data can be written.

Writing data automatically sets the following metadata:
  • depends: Names of the databases that this database references, e.g. “biosphere”

  • number: Number of processes in this database.


format (*) – Format that the database was converted from, e.g. “Ecospold”

relabel_data(data, new_name)[source]#

Relabel database keys and exchanges.

For dataset keys and exchange inputs that refer to the same database, update the database name to new_name.

Needed to copy a database completely or cut out a section of a database.

For example:

data = {
    ("old and boring", 1): {
        "exchanges": [
            {"input": ("old and boring", 42), "amount": 1.0}
        ]
    },
    ("old and boring", 2): {
        "exchanges": [
            {"input": ("old and boring", 1), "amount": 4.0}
        ]
    },
}
print(relabel_data(data, "shiny new"))
>> {
    ("shiny new", 1): {
        "exchanges": [
            {"input": ("old and boring", 42), "amount": 1.0}
        ]
    },
    ("shiny new", 2): {
        "exchanges": [
            {"input": ("shiny new", 1), "amount": 4.0}
        ]
    },
}

In the example, the exchange to ("old and boring", 42) does not change, as this is not part of the updated data.

  • data (*) – The data to modify

  • new_name (*) – The name of the modified database


The modified data
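The relabelling rule can be sketched as follows. This is a simplified stand-in for illustration, not the real `relabel_data` implementation (`relabel_data_sketch` is a hypothetical name):

```python
def relabel_data_sketch(data, new_name):
    """Sketch: rename dataset keys; rewrite exchange inputs only when
    they point at a key inside ``data`` itself."""
    internal = set(data)

    def new_key(key):
        # Only internal links get the new name; external links are untouched.
        return (new_name, key[1]) if key in internal else key

    relabeled = {}
    for key, ds in data.items():
        ds = dict(ds)
        ds["exchanges"] = [
            dict(exc, input=new_key(exc["input"]))
            for exc in ds.get("exchanges", [])
        ]
        relabeled[(new_name, key[1])] = ds
    return relabeled
```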


rename(name)[source]#

Rename a database. Modifies exchanges to link to the new name and deregisters the old database.


name (*) – New name.


New Database object.

abstract write(data)[source]#

Serialize data to disk.

data must be a dictionary of the form:

{
    ('database name', 'dataset code'): {dataset}
}