
Integration with a Cosmo Tech Simulator

Objective

  • Combine previous tutorials and Cosmo Tech Simulator to be able to apply changes to a simulation instance.

Prerequisites

  • You need to have completed the "Brewery" onboarding for Cosmo Tech projects.
  • You need a local version of the "Brewery" solution (full code available here).

Potential Issues

A known issue exists with graphical commands.

Reminder: Model + Project

The full simulator files can be found with the tag Complete-model on the repository.

Online view: here

Project files
MyBrewery/                           |
├─ ConceptualModel/                  |
|  └─ MyBrewery.csm.xml              | CoSML Conceptual Model
├─ Simulation/                       | Simulation instances
|  ├─ Resource/                      |
|  |  ├─ scenariorun-data/           | Example dataset in CSV
|  |  |  ├─ arc_to_Customer.csv      |
|  |  |  ├─ Bar.csv                  |
|  |  |  └─ Customer.csv             |
|  |  ├─ Brewery.ist.xml             | Model instance in XML
|  |  ├─ CSV_Brewery.ist.xml         | Model instance using CSVs
|  |  └─ InstanceCalibration.ini.xml | Initialize an entity using XML
|  ├─ BusinessApp_Simulation.sml.xml | CSV files -> CSV outputs
|  ├─ CSV_Simulation.sml.xml         | CSV files -> graphical results
|  └─ XML_Simulation.sml.xml         | XML instantiation -> graphical results
├─ Simulator/                        |
|  └─ Simulator.sor.xml              | CoSML Simulator
└─ project.csm                       | Information on your project

The Brewery conceptual model is very simple: it consists of a Bar entity and a Customer entity, where the Bar contains the Customer(s). The Bar can serve the Customers based on customer thirst levels and stock. It restocks when stock drops below a set restock quantity.

Customers have a Thirsty state and a Satisfaction state, which affect each other: the higher the satisfaction, the higher the chance of becoming thirsty, and the longer a customer is left thirsty, the lower the satisfaction. Satisfaction increases when a customer is served. Satisfaction is also affected by the satisfaction of surrounding customers.

For this tutorial we will write our new files in the folder MyBrewery/code/run_templates/orchestrator_tutorial_1 (this folder hierarchy will be used in future tutorials too).

Define a set of parameters to apply

In our simulation we will want to see the effects of variations on the Bar attributes.

Our existing CSV based simulations look for 3 attributes to instantiate a Bar:

  • NbWaiters: the number of waiters in our Bar
  • RestockQty: the quantity of elements to restock when getting below the threshold
  • Stock: the Bar initial stock

We will then use those 3 attributes as parameters for our simulations.

To store our parameters we will define a JSON file containing them.

code/run_templates/orchestrator_tutorial_1/parameters.json
[
  {
    "parameterId": "Stock",
    "value": 123
  },
  {
    "parameterId": "RestockQty",
    "value": 4567
  },
  {
    "parameterId": "NbWaiters",
    "value": 89
  }
]
About the JSON file format

In anticipation of future use, we define a JSON format close to the one returned by the command:

csm-data api scenariorun-load-data
This command will be used later to download data from the Cosmo Tech API.
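As a quick illustration of this format, the list of parameter objects can be indexed by `parameterId` with a one-line dict comprehension (the values are the ones from our parameters.json):

```python
import json

# The parameter list format shown above, as it would be read from file.
raw = """[
  {"parameterId": "Stock", "value": 123},
  {"parameterId": "RestockQty", "value": 4567},
  {"parameterId": "NbWaiters", "value": 89}
]"""

# Index the list by parameterId for easy lookups later on.
parameters = {p["parameterId"]: p["value"] for p in json.loads(raw)}
print(parameters)
```

This lookup table is exactly what our script below builds from the file before applying the values to the dataset.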

Apply our parameters

Having defined our 3 parameters, we can now write a script that applies them to a given dataset.

Our script will consist of 3 steps:

  • Read the original dataset
  • Apply our parameters to the dataset
  • Write the new dataset in a given folder

We will need 3 parameters for the script:

  • The path to our original dataset
  • The path to our parameter file
  • The path where we want to write our new dataset

Using this information, we can write a simple script:

code/run_templates/orchestrator_tutorial_1/apply_parameters.py
import argparse
import json
import pathlib
from csv import DictReader
from csv import DictWriter

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Parameters apply")
    parser.add_argument("input_path",
                        type=str,
                        help="A path containing our original dataset")
    parser.add_argument("output_path",
                        type=str,
                        help="A path where we will write our updated dataset")
    parser.add_argument("parameters_path",
                        type=argparse.FileType('r'),
                        help="A path to a parameters.json file")

    args = parser.parse_args()

    # Let's make a copy of our original dataset
    original_dataset_path = pathlib.Path(args.input_path)

    if not original_dataset_path.exists():
        raise FileNotFoundError(f"The folder {original_dataset_path} "
                                "does not exist")

    dataset_files = dict()

    for _file in original_dataset_path.glob("*.csv"):
        _file_name = _file.name
        dataset_files.setdefault(_file_name, [])
        with _file.open("r") as _file_content:
            reader = DictReader(_file_content)
            for row in reader:
                dataset_files[_file_name].append(row)

    # Now that we made a memory copy of our file let's get our parameters

    with args.parameters_path as _file_parameters:
        parameters = json.load(_file_parameters)
        parameters = dict({_p['parameterId']: _p['value']
                           for _p in parameters})

    # Now we can apply our parameters to our Bar file
    if 'Bar.csv' not in dataset_files:
        raise FileNotFoundError("No Bar.csv could be found "
                                "in the given input folder")

    bars = dataset_files['Bar.csv']

    for bar in bars:
        bar['Stock'] = parameters['Stock']
        bar['NbWaiters'] = parameters['NbWaiters']
        bar['RestockQty'] = parameters['RestockQty']

    # and now that our dataset got updated we can write it

    target_dataset_path = pathlib.Path(args.output_path)
    if target_dataset_path.exists() and not target_dataset_path.is_dir():
        raise FileExistsError(f"{target_dataset_path} exists "
                              f"and is not a directory")
    target_dataset_path.mkdir(parents=True, exist_ok=True)

    for _file_name, _file_content in dataset_files.items():
        _file_path = target_dataset_path / _file_name
        with _file_path.open("w", newline="") as _file:
            writer = DictWriter(_file, _file_content[0].keys())
            writer.writeheader()
            writer.writerows(_file_content)

That script does the trick; let's test it:

Test run of apply_parameters.py
python code/run_templates/orchestrator_tutorial_1/apply_parameters.py Simulation/Resource/scenariorun-data code/run_templates/orchestrator_tutorial_1/scenariorun-data code/run_templates/orchestrator_tutorial_1/parameters.json
cat code/run_templates/orchestrator_tutorial_1/scenariorun-data/Bar.csv
# NbWaiters,RestockQty,Stock,id
# 89,4567,123,MyBar

Having run the script, we can see that Bar.csv was correctly updated with our parameters.
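The same read, apply, write cycle can also be exercised in isolation, without touching the project files. Here is a self-contained sketch on a throwaway folder (the file contents are illustrative, not the tutorial dataset):

```python
import csv
import json
import pathlib
import tempfile

# Throwaway input: a hypothetical Bar.csv and a parameters.json in our format.
work = pathlib.Path(tempfile.mkdtemp())
(work / "Bar.csv").write_text("NbWaiters,RestockQty,Stock,id\n10,25,40,MyBar\n")
(work / "parameters.json").write_text(json.dumps([
    {"parameterId": "Stock", "value": 123},
    {"parameterId": "RestockQty", "value": 4567},
    {"parameterId": "NbWaiters", "value": 89},
]))

# 1. Read the original dataset.
with (work / "Bar.csv").open(newline="") as f:
    rows = list(csv.DictReader(f))

# 2. Apply the parameters.
parameters = {p["parameterId"]: p["value"]
              for p in json.loads((work / "parameters.json").read_text())}
for row in rows:
    for key in ("Stock", "NbWaiters", "RestockQty"):
        row[key] = parameters[key]

# 3. Write the updated dataset back.
with (work / "Bar.csv").open("w", newline="") as f:
    writer = csv.DictWriter(f, rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)

print((work / "Bar.csv").read_text())
```

This mirrors the three steps of apply_parameters.py in miniature, which can be handy for checking the logic before pointing the script at a real dataset.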

Run a simulation with our updated parameters

A limitation of the CoSML language fixes the folder from which a simulation loads its datasets. In the existing model, the file Simulation/Resource/CSV_Brewery.ist.xml sets this folder to Simulation/Resource/scenariorun-data. We will have to use that folder to give the simulator access to our dataset.

About Simulation/Resource/scenariorun-data

The folder Simulation/Resource/scenariorun-data is a special folder: on the Cosmo Tech cloud platform (where solutions are packaged as Docker containers), this folder is replaced by a symbolic link to the path /mnt/scenariorun-data, which is exposed in the environment variable CSM_DATASET_ABSOLUTE_PATH.

The content of this folder will therefore not be available as-is in a container; we need to keep that in mind for future uses.
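A script meant to run both locally and in a container can account for this by preferring the environment variable when it is set. A minimal sketch (the fallback path is our local project layout):

```python
import os
import pathlib

# Prefer the platform-provided dataset path (set inside containers),
# and fall back to the local project folder otherwise.
dataset_path = pathlib.Path(
    os.environ.get("CSM_DATASET_ABSOLUTE_PATH",
                   "Simulation/Resource/scenariorun-data"))
print(dataset_path)
```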

For simplicity, we won't make an effort to preserve the old values of any existing dataset and will simply overwrite its content.

We do this by running apply_parameters.py on the same input and output folder (here Simulation/Resource/scenariorun-data).

A safer way would be to make a back-up of the dataset and to restore it after the run, but we won't go over this possibility in this tutorial.
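For reference, such a back-up could be sketched with the standard library alone; here on a throwaway folder standing in for Simulation/Resource/scenariorun-data:

```python
import pathlib
import shutil
import tempfile

# Throwaway folder standing in for Simulation/Resource/scenariorun-data.
work = pathlib.Path(tempfile.mkdtemp())
dataset = work / "scenariorun-data"
dataset.mkdir()
(dataset / "Bar.csv").write_text("NbWaiters,RestockQty,Stock,id\n10,25,40,MyBar\n")

# 1. Back up the dataset before the run.
backup = work / "scenariorun-data.bak"
shutil.copytree(dataset, backup)

# 2. The run overwrites the dataset in place.
(dataset / "Bar.csv").write_text("overwritten by the run")

# 3. Restore the original dataset after the run.
shutil.rmtree(dataset)
shutil.copytree(backup, dataset)
print((dataset / "Bar.csv").read_text())
```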

To run the simulator we can either make use of csmcli, the main executable, or csm-orc run-step; we will only cover the csm-orc use in this tutorial.

By writing our code in the folder code/run_templates/orchestrator_tutorial_1 we declared a Run Template named orchestrator_tutorial_1 that we can call in csm-orc commands.

The simulator run can be configured by using some options of the csm-orc run-step command:

  • --template orchestrator_tutorial_1 is necessary to target a run template (dependency for non-simulator run steps)
  • --steps engine will either look for a file engine/main.py in target run template or (if not found and the environment variable CSM_SIMULATION is set) try to run the simulator using a CoSML simulation file.

So the following command will run our CSV_Simulation defined by the Simulation/CSV_Simulation.sml.xml:

run CSV_Simulation using csm-orc
CSM_SIMULATION=CSV_Simulation csm-orc run-step --template orchestrator_tutorial_1 --steps engine

Since we did not update our dataset files in place of the original ones (only the engine step was executed), we get our usual simulation results.

Now we can simply run both commands to update our dataset then run the updated simulation:

Apply parameters and run simulation
python code/run_templates/orchestrator_tutorial_1/apply_parameters.py Simulation/Resource/scenariorun-data Simulation/Resource/scenariorun-data code/run_templates/orchestrator_tutorial_1/parameters.json
CSM_SIMULATION=CSV_Simulation csm-orc run-step --template orchestrator_tutorial_1 --steps engine

A different set of charts should appear this time, corresponding to our updated dataset values.

Now we can write our run.json file to run those steps in a single command.

code/run_templates/orchestrator_tutorial_1/run.json
{
  "steps": [
    {
      "id": "ApplyParameters",
      "command": "python",
      "arguments": [
        "code/run_templates/orchestrator_tutorial_1/apply_parameters.py",
        "$DATASET_PATH",
        "$DATASET_PATH",
        "$PARAMETERS_PATH"
      ],
      "description": "Apply our parameters to the dataset",
      "environment": {
        "DATASET_PATH": {
          "description": "The path to the dataset to update",
          "defaultValue": "Simulation/Resource/scenariorun-data"
        },
        "PARAMETERS_PATH": {
          "description": "The path to parameters json file containing our parameters",
          "defaultValue": "code/run_templates/orchestrator_tutorial_1/parameters.json"
        }
      }
    },
    {
      "id": "SimulationRun",
      "command": "csm-orc",
      "arguments": [
        "run-step",
        "--template", "orchestrator_tutorial_1",
        "--steps", "engine"
      ],
      "description": "Runs the simulation targeted by CSM_SIMULATION",
      "useSystemEnvironment": true,
      "environment": {
        "CSM_SIMULATION": {
          "description": "The simulation file to run",
          "defaultValue": "CSV_Simulation"
        }
      },
      "precedents": [
        "ApplyParameters"
      ]
    }
  ]
}

Now that we have this run.json file we can run it:

run run.json
csm-orc run code/run_templates/orchestrator_tutorial_1/run.json

And we are done! Our first simulation based on our parameters file ran in a single command. :)

You can now do the next tutorial: "Make a Cosmo Tech Simulator cloud-ready".