Installation

The package is available on PyPI. Use the command below to install it.

$ pip install -U ibm-aigov-facts-client

Requirements

  • Only Python 3.7 or newer is supported. ibm-cloud-sdk-core == 3.10.1 is required to authenticate to the service.

  • For the factsheet service, Watson Knowledge Catalog is required.

  • If using the Space container, Watson Machine Learning is required.

  • If using the Project container, Watson Studio is required.

Supported engines

  • Watson Machine Learning

  • (External) Azure ML service

  • (External) AWS Sagemaker

Guidelines and example workflow

Note

  • Initiate the facts client at the top of the notebook, before any other imports. For external models, it can be initiated inside custom training scripts.

  • Each use case is referred to as an Experiment, and each training with any machine learning framework is considered a Run; an experiment can have multiple runs.

  • Use one Experiment per use case and notebook; creating multiple experiments unnecessarily impacts lineage tracking and comparison.

  • Models stored in Watson Machine Learning should have custom meta attributes defined using the provided utility (see Watson Machine Learning Elements) for lineage tracking.

  • For native learners in external providers (e.g., AWS Sagemaker Linear Learner), autolog is not supported; use the manual log option instead.

[Figures: basic workflow (_images/basic_workflow.png) and external model workflow (_images/external_workflow.png)]

Basic logging elements

AIGovFactsClient

class FactsClientAdapter(experiment_name: str, container_type: Optional[str] = None, container_id: Optional[str] = None, api_key: Optional[str] = None, set_as_current_experiment: Optional[bool] = False, enable_autolog: Optional[bool] = True, external_model: Optional[bool] = False)

Bases: ibm_aigov_facts_client.base_classes.auth.FactsAuthClient

AI GOVERNANCE FACTS CLIENT

Variables

version (str) – Returns the version of the Python library.

Parameters
  • experiment_name (str) – Name of the Experiment.

  • container_type (str) – (Optional) Type of container where the model is saved. Currently supported options are SPACE or PROJECT. Required when using IBM Cloud.

  • container_id (str) – (Optional) Container ID specific to the container type. Required when using IBM Cloud.

  • set_as_current_experiment (bool) – (Optional) If True, a new experiment will not be created when one with the same name already exists; the existing experiment is reused. Defaults to False.

  • enable_autolog (bool) – (Optional) If False, the manual log option is available. Defaults to True.

  • external_model (bool) – (Optional) If True, tracing of external models is enabled. Defaults to False.

A way you might use me is:

For IBM Cloud:

>>> from ibm_aigov_facts_client import AIGovFactsClient
>>> client = AIGovFactsClient(api_key=<API_KEY>, experiment_name="test",container_type="space",container_id=<space_id>)
>>> client = AIGovFactsClient(api_key=<API_KEY>,experiment_name="test",container_type="project",container_id=<project_id>)

If using existing experiment as current:

>>> client = AIGovFactsClient(api_key=<API_KEY>, experiment_name="test",container_type="space",container_id=<space_id>,set_as_current_experiment=True)

If using external models with manual log:

>>> client= AIGovFactsClient(api_key=API_KEY,experiment_name="external",enable_autolog=False,external_model=True)

If using external models with Autolog:

>>> client= AIGovFactsClient(api_key=API_KEY,experiment_name="external",external_model=True)

For Standalone use in localhost without factsheet functionality:

>>> from ibm_aigov_facts_client import AIGovFactsClient
>>> client = AIGovFactsClient(experiment_name="test")

AIGovFactsClient.FrameworkSupportNames

class FrameworkSupportOptions

To view supported frameworks, use:

>>> client.FrameworkSupportNames.show()

To see only supported framework names, use:

>>> client.FrameworkSupportNames.get()

Set of Supported Frameworks for Auto/Manual logging.

Available Options:

Name         Framework Name   Autolog   Version
scikit       sklearn          Y         0.20.3 <= scikit-learn <= 0.24.2
Tensorflow   tensorflow       Y         1.15.4 <= tensorflow <= 2.6.0
Keras        keras            Y         2.2.4 <= keras <= 2.6.0
PySpark      pyspark          Y         3.0.0 <= pyspark <= 3.1.2
Xgboost      xgboost          Y         0.90 <= xgboost <= 1.4.2
LightGBM     lightgbm         Y         2.3.1 <= lightgbm <= 3.2.1
PyTorch      pytorch          Y         1.0.5 <= pytorch-lightning <= 1.4.4

Scikit learn

class FrameworkSupportSklearn

Current autolog support scope for Scikit learn.

Available Options:

Framework Name: sklearn

Trigger Methods:
  • estimator.fit()
  • estimator.fit_predict()
  • estimator.fit_transform()

Training Metrics:
  Classifier:
    • precision score
    • recall score
    • f1 score
    • accuracy score
  If the classifier has a predict_proba method, additionally:
    • log loss
    • roc auc score
  Regression:
    • mean squared error
    • root mean squared error
    • mean absolute error
    • r2 score

Parameters:
  • estimator.get_params(deep=True)

Tags:
  • estimator class name (e.g. "LinearRegression")
  • fully qualified estimator class name (e.g. "sklearn.linear_model._base.LinearRegression")

Post Training Metrics:
  Scikit-learn metric APIs:
    • model.score
    • metric APIs defined in the sklearn.metrics module
  Note:
    • metric key format is: {metric_name}[-{call_index}]_{dataset_name}
    • for sklearn.metrics APIs, metric_name is the metric function name
    • for model.score, metric_name is {model_class_name}_score
    • if multiple calls are made to the same scikit-learn metric API, each subsequent call adds a "call_index" (starting from 2) to the metric key

Search Estimators:
  Meta estimators:
    • Pipeline
    • GridSearchCV
    • RandomizedSearchCV
  Child runs are logged with metrics for each set of explored parameters, as well as parameters for the best model and the best parameters (if available).

Note

  • In a notebook setting, if post-training metrics need to be logged, include all metric calculations in the same cell as the estimator fit() call, as in the sketch below. Metric calculations outside that cell are not logged automatically.

  • Scikit-learn metric APIs invoked on derived objects do not log metrics.

  • If a user-defined scorer is not based on the metric APIs in sklearn.metrics, post-training metric autologging for that scorer is not supported.
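
For illustration, here is a minimal autolog sketch for scikit-learn. It is only a sketch: the API key, space ID, experiment name, and dataset are placeholders, and the client is initiated before the scikit-learn imports as recommended above.

>>> from ibm_aigov_facts_client import AIGovFactsClient
>>> client = AIGovFactsClient(api_key="<API_KEY>", experiment_name="sklearn-demo",
...                           container_type="space", container_id="<SPACE_ID>")
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.metrics import accuracy_score
>>> X, y = load_iris(return_X_y=True)
>>> model = LogisticRegression(max_iter=200)
>>> model.fit(X, y)  # fit() triggers autologging of params, tags, and training metrics
>>> accuracy_score(y, model.predict(X))  # post-training metric; logged because it runs in the same cell as fit()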

PySpark

class FrameworkSupportSpark

Current autolog support scope for Spark.

Available Options:

Framework Name: pyspark

Trigger Methods:
  • estimator.fit(), except for estimators (featurizers) under pyspark.ml.feature

Training Metrics:
  • Not supported

Parameters:
  • estimator.params
  • If a param value is itself an Estimator, the params of the wrapped estimator are also logged; the nested param key is {estimator_uid}.{param_name}

Tags:
  • estimator class name (e.g. "LinearRegression")
  • fully qualified estimator class name (e.g. "pyspark.ml.regression.LinearRegression")

Post Training Metrics:
  • pyspark ML evaluators used under Evaluator.evaluate
  • metric key format is: {metric_name}[-{call_index}]_{dataset_name}
  • metric name: Evaluator.getMetricName()
  • if multiple calls are made to the same pyspark ML evaluator metric API, each subsequent call adds a "call_index" (starting from 2) to the metric key

Search Estimators:
  Meta estimators:
    • Pipeline
    • CrossValidator
    • TrainValidationSplit
    • OneVsRest
  Child runs are logged with metrics for each set of explored parameters, as well as parameters for the best model and the best parameters (if available).

Note

  • The Spark session must be created with the modified Spark jar package, which is aliased as ORG_FACTS_SPARK, for autologging to work:

    import pyspark
    from pyspark.sql import SparkSession
    spark = (SparkSession.builder
        .config("spark.jars.packages", facts_client.ORG_FACTS_SPARK)
        .getOrCreate())
    

Note

  • In a notebook setting, if post-training metrics need to be logged, include all metric calculations in the same cell as the estimator fit() call. Metric calculations outside that cell are not logged automatically.

  • The facts client cannot find run information for objects derived from a given prediction result (e.g., obtained by applying a transformation to the prediction result dataset).
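
As an illustrative sketch (not a definitive recipe), the following fits a pyspark estimator and evaluates it in the same cell; the tiny inline dataset is a placeholder, and the SparkSession is created with the jar package noted above.

>>> from pyspark.sql import SparkSession
>>> spark = (SparkSession.builder
...     .config("spark.jars.packages", facts_client.ORG_FACTS_SPARK)
...     .getOrCreate())
>>> from pyspark.ml.linalg import Vectors
>>> from pyspark.ml.classification import LogisticRegression
>>> from pyspark.ml.evaluation import BinaryClassificationEvaluator
>>> train_df = spark.createDataFrame(
...     [(Vectors.dense([0.0, 1.1]), 0.0), (Vectors.dense([2.0, 1.0]), 1.0)],
...     ["features", "label"])  # placeholder training data
>>> lr = LogisticRegression(featuresCol="features", labelCol="label")
>>> model = lr.fit(train_df)  # fit() triggers autologging of params and tags
>>> BinaryClassificationEvaluator(labelCol="label").evaluate(model.transform(train_df))  # Evaluator.evaluate logs the post-training metric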

Keras

class FrameworkSupportKeras

Current autolog support scope for Keras.

Available Options:

Framework Name: keras

Trigger Methods:
  • estimator.fit()

Training Metrics:
  • Training loss
  • Validation loss
  • User-specified metrics
  Metrics related to EarlyStopping callbacks:
  • stopped_epoch
  • restored_epoch
  • restore_best_weight
  • last_epoch, etc.

Parameters:
  • fit() or fit_generator() params
  • Optimizer name
  • Learning rate
  • Epsilon
  Params related to EarlyStopping:
  • min_delta
  • patience
  • baseline
  • restore_best_weights, etc.
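
A minimal Keras autolog sketch is shown below; the layer sizes, data, and EarlyStopping settings are arbitrary placeholders, and the facts client is assumed to have been initiated beforehand as in the basic logging examples.

>>> import numpy as np
>>> from tensorflow import keras
>>> X, y = np.random.rand(100, 4), np.random.rand(100, 1)
>>> model = keras.Sequential([keras.layers.Dense(8, activation="relu", input_shape=(4,)),
...                           keras.layers.Dense(1)])
>>> model.compile(optimizer="adam", loss="mse", metrics=["mae"])  # optimizer name, learning rate, epsilon are autologged
>>> early_stop = keras.callbacks.EarlyStopping(min_delta=0.001, patience=3, restore_best_weights=True)
>>> model.fit(X, y, validation_split=0.2, epochs=10, callbacks=[early_stop])  # fit() triggers autologging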

Tensorflow

class FrameworkSupportTensorflow

Current autolog support scope for Tensorflow.

Available Options:

Framework Name: tensorflow

Trigger Methods:
  • estimator.fit()

Training Metrics:
  • Training loss
  • Validation loss
  • User-specified metrics
  Metrics related to EarlyStopping callbacks:
  • stopped_epoch
  • restored_epoch
  • restore_best_weight
  • last_epoch, etc.
  TensorBoard metrics:
  • average_loss
  • loss
  TensorFlow Core:
  • tf.summary.scalar calls

Parameters:
  • fit() or fit_generator() params
  • Optimizer name
  • Learning rate
  • Epsilon
  Params related to EarlyStopping:
  • min_delta
  • patience
  • baseline
  • restore_best_weights, etc.
  TensorBoard params:
  • steps
  • max_steps

XGBoost

class FrameworkSupportXGB

Current autolog support scope for XGBoost.

Available Options:

Framework Name: xgboost

Trigger Methods:
  • xgboost.train()

Training Metrics:
  • Metrics at each iteration (if evals specified)
  • Metrics at best iteration (if early_stopping_rounds specified)

Parameters:
  • params specified in xgboost.train

Warning

  • scikit-learn API is not supported.
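
A minimal sketch of the supported native API is shown below; the data, params, and evaluation split are placeholders, and the facts client is assumed to have been initiated beforehand.

>>> import numpy as np
>>> import xgboost as xgb
>>> X, y = np.random.rand(100, 4), np.random.randint(0, 2, 100)
>>> dtrain = xgb.DMatrix(X[:80], label=y[:80])
>>> dval = xgb.DMatrix(X[80:], label=y[80:])
>>> booster = xgb.train({"objective": "binary:logistic", "max_depth": 3}, dtrain,
...                     num_boost_round=20,
...                     evals=[(dval, "validation")],  # per-iteration metrics are autologged
...                     early_stopping_rounds=5)       # best-iteration metrics are autologged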

LightGBM

class FrameworkSupportLGBM

Current autolog support scope for LightGBM.

Available Options:

Framework Name: lightgbm

Trigger Methods:
  • lightgbm.train()

Training Metrics:
  • Metrics at each iteration (if evals specified)
  • Metrics at best iteration (if early_stopping_rounds specified)

Parameters:
  • params specified in lightgbm.train

Warning

  • scikit-learn API is not supported.
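
A similar sketch for the native LightGBM API, again with placeholder data and params, assuming a lightgbm version in the supported range and a previously initiated facts client:

>>> import numpy as np
>>> import lightgbm as lgb
>>> X, y = np.random.rand(100, 4), np.random.randint(0, 2, 100)
>>> dtrain = lgb.Dataset(X[:80], label=y[:80])
>>> dval = lgb.Dataset(X[80:], label=y[80:], reference=dtrain)
>>> booster = lgb.train({"objective": "binary"}, dtrain,
...                     num_boost_round=20,
...                     valid_sets=[dval],        # per-iteration metrics are autologged
...                     early_stopping_rounds=5)  # best-iteration metrics are autologged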

PyTorch

class FrameworkSupportPyTorch

Current autolog support scope for PyTorch.

Available Options:

Framework Name: pytorch

Trigger Methods:
  • pytorch_lightning.Trainer(), i.e., models that subclass pytorch_lightning.LightningModule

Training Metrics:
  • Training loss
  • Validation loss
  • average_test_accuracy
  • User-defined metrics
  Metrics related to EarlyStopping callbacks:
  • stopped_epoch
  • restored_epoch
  • restore_best_weight
  • last_epoch, etc.

Parameters:
  • fit() parameters
  • Optimizer name
  • Learning rate
  • Epsilon
  Params related to EarlyStopping:
  • min_delta
  • patience
  • baseline
  • restore_best_weights, etc.

Note

  • Only PyTorch Lightning models are supported.

  • Support for vanilla PyTorch models that only subclass torch.nn.Module is not yet available.
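
As a minimal PyTorch Lightning sketch (the module, data, and hyperparameters are arbitrary placeholders; the facts client is assumed to have been initiated beforehand):

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    import pytorch_lightning as pl

    class TinyRegressor(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(4, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = nn.functional.mse_loss(self.layer(x), y)
            self.log("train_loss", loss)  # user-defined metric picked up by autolog
            return loss

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)

    data = DataLoader(TensorDataset(torch.randn(64, 4), torch.randn(64, 1)), batch_size=16)
    trainer = pl.Trainer(max_epochs=2)
    trainer.fit(TinyRegressor(), data)  # Trainer.fit() triggers autologging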

Basic Utilities

Note

  • The following utilities rely on the local storage that the facts client uses as an intermediary step. When using a Watson Studio notebook environment, reloading the notebook session may cause exceptions in this local storage, so these utilities may not function as expected. This does not affect results already saved to the factsheet; users can still see them through the factsheet UI.

AIGovFactsClient.experiments

class Experiments(root_directory=None)

Bases: object

Utility to explore current experiments.

list_experiments(max_results: int = 100) → pandas.core.frame.DataFrame

List all active experiments.

Returns

DataFrame object.

Return type

Pandas.DataFrame

A way you might use me is:

>>> client.experiments.list_experiments()

get_current_experiment_id()

Shows current experiment id.

Returns

str

A way you might use me is:

>>> client.experiments.get_current_experiment_id()

AIGovFactsClient.runs

class Runs(root_directory=None)

Bases: object

Utilities to explore runs within any experiment.

list_runs_by_experiment(experiment_id: str, order_by: Optional[List[str]] = None) → pandas.core.frame.DataFrame

List all runs under any experiment

Parameters
  • experiment_id (str) – ID of the experiment.

  • order_by – List of order_by clauses. Currently supported values are metric.key, parameter.key, and tag.key. For example, order_by=["tag.release ASC", "metric.training_score DESC"]

Returns

DataFrame object that satisfy the search expressions.

Return type

Pandas.DataFrame

A way you might use me is:

>>> client.runs.list_runs_by_experiment("1")
>>> client.runs.list_runs_by_experiment("1", order_by=["metric.training_score DESC"])

get_current_run_id()

Shows current active run id.

Returns

str

A way you might use me is:

>>> client.runs.get_current_run_id()

External model manual log elements

AIGovFactsClient.manual_log

class ManualLog(experiment_name=None, set_as_current_exp=None, root_dir=None)

Bases: object

Enables users to trace experiments from external machine learning engines manually.

start_trace(experiment_id: Optional[str] = None, run_id: Optional[str] = None)

Start a tracing session when using the manual log option. By default, it uses the current experiment set during client initialization.

Parameters
  • experiment_id (str) – (Optional) ID of an experiment. The logging session starts under the given experiment, so runs are grouped under it.

  • run_id (str) – (Optional) ID of a specific run. The session starts under the given run, so logging is attached to it.

A way you might use me is:

>>> client.manual_log.start_trace()
>>> client.manual_log.start_trace(experiment_id="1")
>>> client.manual_log.start_trace(run_id=run_id)

log_metric(key: str, value: float, step: Optional[int] = None) → None

Log a metric against active run.

Parameters
  • key (str) – Metric name.

  • value (float) – Metric value (float).

  • step (int) – Integer training step (iteration) at which the metric was calculated. Defaults to 0.

Returns

None

A way you might use me is:

>>> client.manual_log.log_metric("mae", .77)

log_metrics(metrics: Dict[str, float], step: Optional[int] = None) → None

Log multiple metrics under active run.

Parameters
  • metrics (dict) – Dictionary of metric_name: String -> value: Float.

  • step (int) – Integer training step (iteration) at which the metric was calculated. Defaults to 0.

Returns

None

A way you might use me is:

>>> client.manual_log.log_metrics({"mse": 2000.00, "rmse": 50.00})

log_param(key: str, value: Any) → None

Log a param against active run.

Parameters
  • key (str) – Param name.

  • value – Param value. The value is converted to a string.

Returns

None

A way you might use me is:

>>> client.manual_log.log_param("c", 1)

log_params(params: Dict[str, Any]) → None

Log multiple params under active run.

Parameters

params (dict) – Dictionary of param name (str) -> value (converted to a string if it is not one already).

Returns

None

A way you might use me is:

>>> client.manual_log.log_params({"n_estimators": 3, "random_state": 42})

set_tag(key: str, value: Any) → None

Log tag for active run.

Parameters
  • key (str) – Tag name.

  • value – Tag value. The value is converted to a string.

Returns

None

A way you might use me is:

>>> client.manual_log.set_tag("engineering", "ML Platform")

set_tags(tags: Dict[str, Any]) → None

Log multiple tags for active run.

Parameters

tags (dict) – Dictionary of tag name (str) -> value (converted to a string if it is not one already).

Returns

None

A way you might use me is:

>>> client.manual_log.set_tags({"engineering": "ML Platform",
"release.candidate": "RC1"})

end_trace()

End an active session.

A way you might use me is:

>>> client.manual_log.end_trace()
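
Putting the manual-log methods together, a minimal end-to-end sketch might look like the following; it assumes the client was initiated with enable_autolog=False and external_model=True, the metric and parameter values are placeholders, and the export step is described under Factsheet custom export elements below.

>>> client.manual_log.start_trace()
>>> client.manual_log.log_params({"n_estimators": 3, "random_state": 42})
>>> client.manual_log.log_metrics({"mse": 2000.00, "rmse": 50.00})
>>> client.manual_log.set_tags({"engineering": "ML Platform"})
>>> run_id = client.runs.get_current_run_id()  # capture the active run id before ending the trace
>>> client.manual_log.end_trace()
>>> client.export_facts.export_payload_manual(run_id)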

Factsheet modify existing run elements

AIGovFactsClient.runs

class Runs(root_directory=None)

Bases: object

Utilities to explore runs within any experiment.

log_metric(run_id: str, key: str, value: float, step: Optional[int] = None) → None

Log a metric against the run ID.

Parameters
  • run_id (str) – The unique id for run.

  • key (str) – Metric name.

  • value (float) – Metric value (float).

  • step (int) – Integer training step (iteration) at which the metric was calculated. Defaults to 0.

Returns

None

A way you might use me is:

>>> client.runs.log_metric(run_id, "mae", .77)

log_metrics(run_id: str, metrics: Dict[str, float], step: Optional[int] = None) → None

Log multiple metrics for the given run.

Parameters
  • run_id (str) – The unique id for run.

  • metrics (dict) – Dictionary of metric_name: String -> value: Float.

  • step (int) – Integer training step (iteration) at which the metric was calculated. Defaults to 0.

Returns

None

A way you might use me is:

>>> client.runs.log_metrics(run_id, {"mse": 2000.00, "rmse": 50.00})

log_param(run_id: str, key: str, value: Any) → None

Log a param against the run ID.

Parameters
  • run_id (str) – The unique id for run.

  • key (str) – Param name.

  • value – Param value. The value is converted to a string.

Returns

None

A way you might use me is:

>>> client.runs.log_param(run_id, "c", 1)

log_params(run_id, params: Dict[str, Any]) → None

Log multiple params for the given run.

Parameters
  • run_id (str) – The unique id for run.

  • params (dict) – Dictionary of param name (str) -> value (converted to a string if it is not one already).

Returns

None

A way you might use me is:

>>> client.runs.log_params(run_id, {"n_estimators": 3, "random_state": 42})

set_tags(run_id: str, tags: Dict[str, Any]) → None

Log multiple tags for the given run.

Parameters
  • run_id (str) – The unique id for run.

  • tags (dict) – Dictionary of tag name (str) -> value (converted to a string if it is not one already).

Returns

None

A way you might use me is:

>>> client.runs.set_tags(run_id, {"engineering": "ML Platform",
"release.candidate": "RC1"})

Factsheet custom export elements

Note

  • The following export_payload utility is for cases where something additional needs to be logged under a given training run. After autolog has sent results to the factsheet, the utilities described under Factsheet modify existing run elements can be used to add new metrics, params, and tags to any training run and export the updated results to the factsheet.

  • It relies on the local storage that the facts client uses as an intermediary step. When using a Watson Studio notebook environment, reloading the notebook session may cause exceptions in this local storage, so these utilities may not function as expected. In such cases, reinitiate the facts client with the same experiment name, re-run the training cells, update any results needed, and export to the factsheet.

AIGovFactsClient.export_facts

class ExportFacts(facts_service_client: ibm_aigov_facts_client.client.fact_trace.FactsClientAdapter, **kwargs)

Bases: ibm_aigov_facts_client.base_classes.auth.FactsheetServiceClientAutolog

export_payload(run_id: str, root_directory: Optional[str] = None) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Export single run to factsheet service.

Parameters
  • run_id (str) – Id of run to be exported

  • root_directory (str) – (Optional) Absolute path for directory containing experiments and runs.

Returns

A DetailedResponse containing the factsheet response result

Return type

DetailedResponse

A way you might use me is:

>>> client.export_facts.export_payload(<RUN_ID>)

class ExportFactsManual(facts_service_client: ibm_aigov_facts_client.client.fact_trace.FactsClientAdapter, **kwargs)

Bases: ibm_aigov_facts_client.base_classes.auth.FactsheetServiceClientManual

export_payload_manual(run_id: str, root_directory: Optional[str] = None) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Export a single run to the factsheet when using the manual logging option. Use this when the client is initiated with enable_autolog=False and external_model=True.

Parameters
  • run_id (str) – Id of run to be exported

  • root_directory (str) – (Optional) Absolute path for directory containing experiments and runs.

Returns

A DetailedResponse containing the factsheet response result

Return type

DetailedResponse

A way you might use me is:

>>> client.export_facts.export_payload_manual(<RUN_ID>)

Custom elements

Watson Machine Learning Elements

class ExportFacts(facts_service_client: ibm_aigov_facts_client.client.fact_trace.FactsClientAdapter, **kwargs)

Bases: ibm_aigov_facts_client.base_classes.auth.FactsheetServiceClientAutolog

prepare_model_meta(wml_client: object, meta_props: Dict[str, Any], experiment_name: Optional[str] = None) → Dict

Add current experiment attributes to model meta properties

Parameters
  • wml_client (object) – Watson Machine learning client object.

  • meta_props (dict) – Current model meta properties.

  • experiment_name (str) – (Optional) Explicit name of an experiment to be used.

Returns

A Dict containing the updated meta properties.

Return type

Dict

A way you might use me is:

>>> client.export_facts.prepare_model_meta(wml_client=<wml_client>,meta_props=<wml_model_meta_props>)
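
As a rough sketch of how this fits into a Watson Machine Learning save flow: the store_model call and the ModelMetaNames/software-specification helpers below come from the ibm_watson_machine_learning SDK (not this library), and the credentials, space ID, model type, and software spec name are placeholders that depend on your environment and SDK version.

>>> from ibm_watson_machine_learning import APIClient
>>> wml_client = APIClient(wml_credentials)  # assumes WML credentials are already defined
>>> wml_client.set.default_space("<SPACE_ID>")
>>> meta_props = {
...     wml_client.repository.ModelMetaNames.NAME: "my model",
...     wml_client.repository.ModelMetaNames.TYPE: "scikit-learn_0.23",
...     wml_client.repository.ModelMetaNames.SOFTWARE_SPEC_UID:
...         wml_client.software_specifications.get_uid_by_name("default_py3.7")
... }
>>> meta_props = client.export_facts.prepare_model_meta(wml_client=wml_client, meta_props=meta_props)
>>> model_details = wml_client.repository.store_model(model=model, meta_props=meta_props)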

FactSheetElements

Note

Sample csv can be downloaded from here csv

class FactSheetElements

Bases: object

replace_asset_properties(api_key, csvFilePath, type_name=None, overwrite=True)

Utility to add custom asset properties of model or model entry.

Parameters
  • api_key (str) – IBM Cloud API key

  • csvFilePath (str) – File path of csv having the asset properties.

  • type_name (str) – Asset type to update. Current options are modelfacts_user and model_entry_user. Defaults to modelfacts_user.

  • overwrite (bool) – Merge or replace current properties. Default is to overwrite.

A way you might use me is:

>>> from ibm_aigov_facts_client import FactSheetElements
>>> client = FactSheetElements()
>>> client.replace_asset_properties("Asset_type_definition.csv")
>>> client.factsheet_utils.replace_asset_properties("Asset_type_definition.csv",type_name="model_entry_user", overwrite=False)

ExternalModelFactsElements

Note

Sample formats can be downloaded from here txt

class ExternalModelFactsElements(api_key: str, experiment_name: str)

Bases: object

save_external_model_asset(model_identifier: str, name: str, description: Optional[str] = None, schemas: Optional[ibm_aigov_facts_client.factsheet.external_modelfacts_utility.ExternalModelSchemas] = None, training_data_reference: Optional[ibm_aigov_facts_client.factsheet.external_modelfacts_utility.TrainingDataReference] = None, deployment_details: Optional[ibm_aigov_facts_client.factsheet.external_modelfacts_utility.DeploymentDetails] = None)

Save External model assets in catalog.

Parameters
  • model_identifier (str) – Identifier specific to the ML provider (e.g., Azure ML service: service_id, AWS Sagemaker: model_name)

  • name (str) – Name of the model

  • description (str) – (Optional) description of the model

  • schemas (ExternalModelSchemas) – (Optional) Input and Output schema of the model

  • training_data_reference (TrainingDataReference) – (Optional) Training data schema

  • deployment_details (DeploymentDetails) – (Optional) Model deployment details

If using external models with manual log option, initiate client as:

from ibm_aigov_facts_client import AIGovFactsClient,DeploymentDetails,TrainingDataReference,ExternalModelSchemas
client= AIGovFactsClient(api_key=API_KEY,experiment_name="external",enable_autolog=False,external_model=True)

If using external models with Autolog, initiate client as:

from ibm_aigov_facts_client import AIGovFactsClient,DeploymentDetails,TrainingDataReference,ExternalModelSchemas
client= AIGovFactsClient(api_key=API_KEY,experiment_name="external",external_model=True)

Payload example by supported external providers:

Azure ML Service:

external_schemas=ExternalModelSchemas(input=input_schema,output=output_schema)
trainingdataref=TrainingDataReference(schema=training_ref)
deployment=DeploymentDetails(identifier=<service_url in Azure>,name="deploymentname",deployment_type="online",scoring_endpoint="test/score")

facts_client.external_model_facts.save_external_model_asset(model_identifier=<service_id in Azure>
                                                            ,name=<model_name>
                                                            ,deployment_details=deployment
                                                            ,schemas=external_schemas
                                                            ,training_data_reference=trainingdataref)

AWS Sagemaker:

external_schemas=ExternalModelSchemas(input=input_schema,output=output_schema)
trainingdataref=TrainingDataReference(schema=training_ref)
deployment=DeploymentDetails(identifier=<endpoint_name in Sagemaker>,name="deploymentname",deployment_type="online",scoring_endpoint="test/score")

facts_client.external_model_facts.save_external_model_asset(model_identifier=<model_name in Sagemaker>
                                                            ,name=<model_name>
                                                            ,deployment_details=deployment
                                                            ,schemas=external_schemas
                                                            ,training_data_reference=trainingdataref)

HELPER CLASSES

ExternalModelSchemas

class ExternalModelSchemas(input: List[Dict], output: Optional[List[Dict]] = None)

External model schema

Attr List[Dict] input

Model input data schema

Attr List[Dict] output

(optional) Model output data schema

TrainingDataReference

class TrainingDataReference(schema: Dict)

Training data schema definition

Attr Dict schema

Model training data schema

DeploymentDetails

class DeploymentDetails(identifier: str, name: str, deployment_type: str, scoring_endpoint: Optional[str] = None)

External model deployment details

Attr str identifier

Deployment identifier specific to providers.

Attr str name

Name of the deployment

Attr str deployment_type

Deployment type (i.e., online)

Attr str scoring_endpoint

Deployment scoring endpoint url.
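
To make the payload examples above concrete, the helper objects might be constructed roughly as follows; the schema dictionaries, identifier, and endpoint shown here are illustrative placeholders only, and the exact schema layout should follow the sample formats linked above.

input_schema = [{"id": "1", "fields": [{"name": "age", "type": "int"}, {"name": "income", "type": "float"}]}]
output_schema = [{"id": "1", "fields": [{"name": "prediction", "type": "float"}]}]
training_ref = {"id": "train_data", "fields": [{"name": "age", "type": "int"}, {"name": "income", "type": "float"}, {"name": "label", "type": "int"}]}

external_schemas = ExternalModelSchemas(input=input_schema, output=output_schema)
trainingdataref = TrainingDataReference(schema=training_ref)
deployment = DeploymentDetails(identifier="<DEPLOYMENT_IDENTIFIER>",
                               name="deploymentname",
                               deployment_type="online",
                               scoring_endpoint="<SCORING_ENDPOINT_URL>")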

MISCELLANEOUS

ENUMS

class ContainerType

Bases: object

Describes possible container types. Contains: [PROJECT,SPACE]

PROJECT = 'project'
SPACE = 'space'
class FactsheetAssetType

Bases: object

Describes possible Factsheet custom asset types. Contains: [MODEL_FACTS_USER,MODEL_ENTRY_USER]

  • The modelfacts_user asset type captures user-defined attributes of a model

  • The model_entry_user asset type captures user-defined attributes of a model entry

MODEL_FACTS_USER = 'modelfacts_user'
MODEL_ENTRY_USER = 'model_entry_user'