You can use docplexcloud to run an OPL model remotely, getting the data in json format, from standard dat files, or from Excel spreadsheets.
The differences from running locally with a CPLEX Optimization Studio installation are in the input and output: on docplexcloud, only tupleSets and single tuples are accepted as valid data channels.
Every tuple and tupleSet declared after the subject to {} block is considered part of the solution to the problem and is persisted in the same format as the input. That is, if you submit an Excel file as input, the output will be an Excel file, whereas if you submit a json file, the output will be a json file.
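For instance, to expose solution values you declare a tuple and a tupleSet after the constraints and fill the set with a comprehension. Here is a minimal sketch, assuming a decision variable array x indexed by a set Items (both names are hypothetical):

// Declared after the subject to {} block, so this tupleSet is
// part of the output and is persisted in the input's format.
tuple t_result {
  int   id;
  float val;
};
{t_result} Results = { <i, x[i]> | i in Items };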
See the docplexcloud documentation for more details.
If your existing model has inputs that are not tupleSets or single tuples, here are some tricks to migrate it easily.
Let's take an example to illustrate how to proceed, with a small model and the dat file it runs with.
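Here is a minimal sketch of such a model. The input names input1, input2 and input3 come from this example; the array shapes, the decision variable x, the objective and the constraint are illustrative assumptions:

int   input1 = ...;                       // a scalar
float input2[1..input1] = ...;            // a 1-dimensional array
float input3[1..input1][1..input1] = ...; // a 2-dimensional array

dvar float+ x[1..input1];

maximize sum(i in 1..input1) input2[i] * x[i];
subject to {
  forall(i in 1..input1)
    sum(j in 1..input1) input3[i][j] * x[j] <= 100;
}

It runs with a dat file along these lines:

input1 = 2;
input2 = [1.0, 2.0];
input3 = [[1.0, 2.0],
          [3.0, 4.0]];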
This example won't run as-is on docplexcloud because input1, input2 and input3 are neither tuples nor tupleSets.
We could rewrite the inputs and adapt the model, but this can be a tedious task. I prefer to add intermediate inputs that fit the docplexcloud format and then initialize the previous inputs (input1, input2, input3) with comprehensions. This way, the optimization model stays as-is and fewer lines of code need to be written, which is less error-prone.
Mapping a multi-dimensional array onto a tupleSet can sometimes seem hard, but OPL has a powerful (and not well-known) syntax called generic indexed arrays that helps a lot.
Adapting the inputs is in fact very simple:
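Here is a sketch of the adapted declarations, under the same assumptions as the model above (the tuple and set names t_input1, t_input2, t_input3, Input1T, Input2Set and Input3Set are illustrative). The original inputs are rebuilt here with filtered sum aggregations; the generic indexed array syntax described in the OPL language reference can make that mapping even more direct:

// New docplexcloud-friendly inputs: a single tuple and two tupleSets.
tuple t_input1 { int val; }
tuple t_input2 { int i; float val; }
tuple t_input3 { int i; int j; float val; }

t_input1   Input1T   = ...;  // single tuple
{t_input2} Input2Set = ...;  // tupleSet
{t_input3} Input3Set = ...;  // tupleSet

// Rebuild the original inputs from the tuple data, so the rest of
// the model (variables, objective, constraints) stays untouched.
int   input1 = Input1T.val;
float input2[i in 1..input1] =
  sum(t in Input2Set : t.i == i) t.val;
float input3[i in 1..input1][j in 1..input1] =
  sum(t in Input3Set : t.i == i && t.j == j) t.val;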
Then the dat file will become:
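Under the same assumed names, the data from the earlier dat file is now expressed as a single tuple and two tupleSets:

Input1T   = <2>;
Input2Set = { <1, 1.0>, <2, 2.0> };
Input3Set = { <1, 1, 1.0>, <1, 2, 2.0>,
              <2, 1, 3.0>, <2, 2, 4.0> };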
With this approach, converting a .mod file to docplexcloud is very easy and almost automatic.
In most cases, the overhead introduced by this intermediate layer is minimal in terms of memory and time, so the model can stay as-is.