Agent API (window.midas)
A JavaScript API for controlling MIDAS programmatically from AI agents, Playwright, or other external tools.
Overview
window.midas provides access to MIDAS features from the browser DevTools console or automation tools like Playwright. Use it to manage datasets, tabs, statistical models, reports, and layout.
Availability
- Project screen: All methods are available
- Launcher screen: Only help() is available. Other methods return a NO_PROJECT error
Project creation and opening are done through the GUI. When automating with Playwright, open a project via GUI first, then use the API.
Usage
From DevTools Console
Open the browser DevTools and call methods directly in the console.
// Check project status
const result = await window.midas.status();
console.log(result.data);
// { datasets: 3, tabs: 2, models: 1, ... }
From Playwright
const result = await page.evaluate(async () => {
return await window.midas.datasets.list();
});
console.log(result.data);
// [{ id: '...', name: 'Iris', rows: 150, columns: 5, type: 'primary' }, ...]
Viewing Help
Call help() to see a list of available methods and their signatures.
const help = window.midas.help();
console.log(help);
Response Format
All methods except help() are async and return a unified APIResult<T> response. help() is synchronous and returns a HelpInfo object directly.
// Success
{
success: true,
message: "Found 3 datasets",
data: [...]
}
// Failure
{
success: false,
message: "Dataset not found",
error: {
code: "DATASET_NOT_FOUND",
message: "No dataset with ID 'abc'",
suggestion: "Use datasets.list() to see available datasets"
}
}
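Because every method returns the same envelope, a small unwrap helper keeps automation scripts terse. This is an illustrative sketch, not part of the API:

```javascript
// Illustrative helper (not part of window.midas): return data on success,
// throw a descriptive error on failure, including the suggestion if present.
function unwrap(result) {
  if (result.success) return result.data;
  const { code, message, suggestion } = result.error ?? {};
  const hint = suggestion ? ` (${suggestion})` : '';
  throw new Error(`${code ?? 'ERROR'}: ${message ?? result.message}${hint}`);
}

// Usage, e.g. from the DevTools console:
// const datasets = unwrap(await window.midas.datasets.list());
```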
A warnings field may be included when the operation succeeds but there are points to note. For example, models.run() stores data preparation warnings in the top-level warnings and model execution warnings in data.warnings.
// Example response with warnings
{
success: true,
message: "Model run completed",
warnings: ["3 rows with missing values were excluded from analysis"],
data: {
runId: '...',
warnings: ["Convergence achieved but Hessian is nearly singular"],
...
}
}
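Since the two warning channels live at different levels of the response, a sketch like the following (illustrative, not part of the API) gathers both into one list:

```javascript
// Illustrative helper: collect both warning channels from a models.run()
// result. Top-level warnings cover data preparation; data.warnings cover
// model execution.
function collectWarnings(result) {
  return [
    ...(result.warnings ?? []),
    ...(result.data?.warnings ?? []),
  ];
}
```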
Method Reference
status()
Get the current project status.
const result = await window.midas.status();
// result.data:
// {
// datasets: 3,
// derivedDatasets: 1,
// tabs: 2,
// models: 1,
// reports: 1,
// activeDatasetId: 'ds_001',
// activeTabId: 'tab_001'
// }
project
project.save()
Save the project to browser storage. See Privacy and Security for details on storage.
await window.midas.project.save();
Returns a SANDBOX_MODE error in sandbox mode (projects where persistence is disabled, such as demos or trials).
project.exportMds()
Export the project as an MDS (MIDAS project file format) binary. The exported data is returned as an ArrayBuffer.
const result = await window.midas.project.exportMds();
// result.data: { data: ArrayBuffer, size: 12345, suggestedFilename: 'MyProject.mds' }
project.downloadMds()
Download the project as an MDS file through the browser.
const result = await window.midas.project.downloadMds();
// result.data: { filename: 'MyProject.mds' }
datasets
datasets.list()
List all datasets in the project.
const result = await window.midas.datasets.list();
// result.data: [{ id, name, rows, columns, type, parentIds? }, ...]
type is one of 'primary' (imported data), 'derived' (created by SQL or other operations), or 'ephemeral' (temporary). parentIds contains the IDs of source datasets for derived datasets.
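Since parentIds only lists direct parents, walking it transitively gives the full ancestry of a derived dataset, which is useful for predicting the SELF_REFERENCE rejections described under datasets.query(). A minimal sketch over the datasets.list() output shape:

```javascript
// Illustrative sketch: collect every ancestor (transitive parent) of a
// dataset from datasets.list() output by walking parentIds.
function ancestorsOf(datasets, id) {
  const byId = new Map(datasets.map((d) => [d.id, d]));
  const seen = new Set();
  const stack = [...(byId.get(id)?.parentIds ?? [])];
  while (stack.length) {
    const pid = stack.pop();
    if (seen.has(pid)) continue;
    seen.add(pid);
    stack.push(...(byId.get(pid)?.parentIds ?? []));
  }
  return [...seen];
}
```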
datasets.describe(id)
Get detailed information about a dataset.
const result = await window.midas.datasets.describe('ds_001');
// result.data:
// {
// id: 'ds_001',
// name: 'Iris',
// type: 'primary',
// rowCount: 150,
// columns: [
// { id: 'col_001', name: 'sepal_length', type: 'float64', scale: 'ratio' },
// { id: 'col_002', name: 'species', type: 'string', scale: 'nominal', enumName: 'species_enum' },
// ...
// ]
// }
Each column entry contains id, name, and type. scale and enumName are optional. enumName is present when the column type is enum, indicating the associated enum definition name.
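Several methods below resolve column names case-insensitively; when scripting against describe() output, a small lookup helper in the same spirit can be handy. An illustrative sketch (not part of the API):

```javascript
// Illustrative helper: resolve a column from datasets.describe() output by
// id (exact) or by name (case-insensitive). Returns null when not found.
function findColumn(describeData, nameOrId) {
  const needle = nameOrId.toLowerCase();
  return describeData.columns.find(
    (c) => c.id === nameOrId || c.name.toLowerCase() === needle
  ) ?? null;
}
```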
datasets.query(sql, name, options?)
Execute a SQL query and save the result as a new derived dataset. SQL follows DuckDB syntax.
const result = await window.midas.datasets.query(
'SELECT species, AVG(sepal_length) as avg_sl FROM Iris GROUP BY species',
'Species Averages'
);
// result.data: { datasetId: 'derived_...', name: 'Species Averages', rowCount: 3, columnCount: 2, overwrote: false }
Table names are automatically resolved from dataset names (case-insensitive). For dataset names containing spaces or non-ASCII characters, use double quotes in SQL (e.g., SELECT * FROM "My Dataset"). By default, if a derived dataset with the same name exists, it is updated in place. The existing dataset ID is preserved, so tabs and dependent datasets referencing that ID remain valid. Set options.overwrite to false to prevent overwriting; in that case, if a derived dataset with the same name already exists, a DATASET_ALREADY_EXISTS error is returned.
If the output name resolves to the same dataset ID as any table referenced in the SQL's FROM / JOIN clauses, or to any ancestor of those referenced tables (a dataset that one of the referenced tables was derived from), the operation would create a dependency cycle and is rejected with a SELF_REFERENCE error (e.g., query('SELECT species, COUNT(*) FROM Iris GROUP BY species', 'Iris'), or, given a chain Iris → A → B, query('SELECT * FROM B', 'A')). If the output name matches an existing primary dataset, a NAME_CONFLICT error is returned (primary datasets cannot be overwritten by derived methods). If a derived dataset with the same name was created by a different method (e.g., trying to overwrite an addColumns dataset with query), an OPERATION_TYPE_MISMATCH error is returned.
Only single SELECT statements are accepted; multiple statements separated by semicolons and DML/DDL statements are rejected. To bring external data in, use datasets.importFromURL or datasets.importFromBuffer.
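When generating SQL programmatically, a cheap client-side pre-check can catch obvious violations before a round trip. The sketch below is only a first-pass filter, not the API's actual validator (for instance, a semicolon inside a string literal would be a false positive):

```javascript
// Rough pre-check (illustrative only; the API performs its own validation).
// Rejects multi-statement input and anything that is not a SELECT/WITH query.
function looksLikeSingleSelect(sql) {
  const trimmed = sql.trim().replace(/;\s*$/, ''); // allow one trailing ;
  if (trimmed.includes(';')) return false;         // multiple statements
  return /^\s*(select|with)\b/i.test(trimmed);     // reject DML/DDL
}
```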
datasets.importFromURL(url, options?)
Fetch a CSV/TSV file from an external URL and import it as a dataset.
const result = await window.midas.datasets.importFromURL(
'https://example.com/data.csv'
);
// result.data: { datasetId: 'ds_...', name: 'data', rowCount: 100, columnCount: 5 }
Use the name property in options to specify the dataset name after import. When omitted, the name is inferred from the URL. If a dataset with the same name already exists, the method returns a DATASET_ALREADY_EXISTS error. Set options.overwrite to true to replace the existing dataset in place (ID preserved). importFromURL / importFromBuffer can overwrite primary datasets as well (unlike derived methods, which reject such cases with NAME_CONFLICT).
Parse failures (empty data, empty header row, column count mismatch between rows, URL validation errors, invalid content type, etc.) return an INVALID_INPUT error. Network failures and timeouts return EXECUTION_ERROR.
URL validation and security restrictions apply. Only HTTP/HTTPS protocols are allowed, and access to cloud metadata endpoints is blocked. Warnings are issued for URLs not in the trusted URL list. If "Block connections to untrusted domains" is enabled in settings, untrusted URLs result in an error. See Privacy and Security for details.
datasets.importFromBuffer(data, options?)
Import CSV/TSV data from an ArrayBuffer or TypedArray (Uint8Array, Node.js Buffer, etc.) as a dataset. Use this when you want to load a local CSV from a Playwright test without spinning up an HTTP server.
// Playwright: load a local CSV via page.evaluate
import { readFileSync } from 'fs';
const csvBytes = Array.from(readFileSync('fixtures/sales.csv'));
const result = await page.evaluate(async (bytes) => {
const buffer = new Uint8Array(bytes).buffer;
return await window.midas.datasets.importFromBuffer(buffer, {
name: 'Sales',
});
}, csvBytes);
// result.data: { datasetId: 'ds_...', name: 'Sales', rowCount: 500, columnCount: 7 }
data accepts an ArrayBuffer or any ArrayBufferView (Uint8Array, DataView, Node.js Buffer, etc.). The options properties are:
- name: Dataset name after import. Defaults to "Untitled".
- hasHeader: Whether to treat the first row as a header. Defaults to true.
- encoding: Character encoding ("utf-8", "shift_jis", "euc-jp"). When omitted, encoding is auto-detected from the byte sequence.
- overwrite: Whether to replace an existing dataset with the same name. Defaults to false.
The delimiter is auto-detected by PapaParse, so both CSV and TSV can be passed. MIDAS automatically adds a row number column (Row#), so the returned columnCount is one more than the number of columns in the source file.
If a dataset with the same name already exists, the method returns a DATASET_ALREADY_EXISTS error. Set overwrite: true to replace the existing dataset in place (ID preserved). importFromBuffer can overwrite primary datasets as well (unlike derived methods, which reject such cases with NAME_CONFLICT). Parse failures (empty data, empty header row, column count mismatch between rows, etc.) return an INVALID_INPUT error.
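From the DevTools console (no Playwright), you can build the buffer directly with TextEncoder, which produces UTF-8 bytes. A minimal sketch, with a hypothetical inline CSV:

```javascript
// Illustrative: build an ArrayBuffer from an inline CSV string (hypothetical
// data) and pass its underlying buffer to importFromBuffer.
const csv = 'name,score\nalice,90\nbob,85\n';
const bytes = new TextEncoder().encode(csv); // Uint8Array of UTF-8 bytes

// await window.midas.datasets.importFromBuffer(bytes.buffer, { name: 'Scores' });
```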
datasets.addColumns(datasetId, input)
Add computed columns to a dataset. The result is created as a new derived dataset. expression follows DuckDB SQL expression syntax. SQL functions such as CASE WHEN and CAST are supported.
const result = await window.midas.datasets.addColumns('ds_001', {
columns: [
{ name: 'bmi', expression: 'weight / (height * height)' }
]
});
// result.data: { datasetId: 'derived_...', name: '...', rowCount: 150, columnCount: 6 }
Use outputName to specify the output dataset name. If a derived dataset with the same name exists, it is updated in place, preserving the existing dataset ID. If outputName resolves to the source datasetId or any of its ancestors (datasets it was derived from), a SELF_REFERENCE error is returned to prevent a dependency cycle; if it collides with an existing primary dataset, a NAME_CONFLICT error is returned. If the existing dataset was created by a different method, an OPERATION_TYPE_MISMATCH error is returned.
datasets.addOrthogonalPolynomials(datasetId, input)
Add orthogonal polynomial columns to a dataset. Used as explanatory variables in polynomial regression.
const result = await window.midas.datasets.addOrthogonalPolynomials('ds_001', {
column: 'temperature',
degree: 3
});
// result.data: { datasetId: 'derived_...', name: '...', rowCount: 150, columnCount: 8, columnNames: ['temperature_poly1', 'temperature_poly2', 'temperature_poly3'] }
Maximum degree is 30. Use outputName to specify the output dataset name. If outputName resolves to the source datasetId or any of its ancestors (datasets it was derived from), a SELF_REFERENCE error is returned to prevent a dependency cycle; if it collides with an existing primary dataset, a NAME_CONFLICT error is returned. If the existing dataset was created by a different method, an OPERATION_TYPE_MISMATCH error is returned.
datasets.setColumnSchema(datasetId, columnId, schema)
Change a column's data type, measurement scale, or enum definition.
const result = await window.midas.datasets.setColumnSchema('ds_001', 'col_002', {
type: 'enum',
scale: 'nominal',
enumName: 'species_enum'
});
// result.data: { datasetId: 'ds_001', columnId: 'col_002', createdDerived: true, derivedDatasetId: 'derived_...', overwrote: false }
schema accepts type, scale, and enumName. At least one is required. Changing the data type involves SQL type conversion, which creates a new derived dataset. If a derived dataset with the same output name already exists, it is updated in place, preserving the existing dataset ID. If outputName points to the source dataset itself or any of its ancestors (datasets it was derived from), a SELF_REFERENCE error is returned to prevent a dependency cycle; if it collides with an existing primary dataset, a NAME_CONFLICT error is returned. If the existing dataset was created by a different method, an OPERATION_TYPE_MISMATCH error is returned. Changing only the measurement scale updates metadata in place without creating a derived dataset.
When converting to enum type, all column values must be in the enum definition or NULL. If out-of-range values are present, the call is rejected with ENUM_VALUE_MISMATCH. Use Convert Column Types first to null-out or exclude unwanted values, or use enums.update to add the missing values to the enum definition.
enums
enums.create(name, values)
Create an enum definition. Up to 50 values can be specified.
const result = await window.midas.enums.create('color', ['red', 'green', 'blue']);
// result.data: { name: 'color', valueCount: 3 }
enums.list()
List all enum definitions.
const result = await window.midas.enums.list();
// result.data: [{ name: 'color', values: ['red', 'green', 'blue'] }, ...]
enums.update(name, values)
Update the values of an existing enum definition.
await window.midas.enums.update('color', ['red', 'green', 'blue', 'yellow']);
Up to 50 values can be specified. Removing values is rejected with ENUM_VALUE_MISMATCH if any dataset still contains the removed values in a column of this enum type. This preserves the invariant that enum column values are always in the definition or NULL. Use Convert Column Types first to null-out or exclude those values, or keep the values in the enum definition.
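Before shrinking an enum definition, it can help to know which values a proposed update would drop, since those are the candidates for ENUM_VALUE_MISMATCH. An illustrative pre-check (not part of the API):

```javascript
// Illustrative helper: list the values a proposed enums.update() call would
// remove from the current definition.
function removedEnumValues(current, proposed) {
  const next = new Set(proposed);
  return current.filter((v) => !next.has(v));
}
```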
enums.remove(name)
Remove an enum definition. A warning is returned if columns still reference this enum.
await window.midas.enums.remove('color');
tabs
tabs.list()
List all open tabs.
const result = await window.midas.tabs.list();
// result.data: [{ id, type, title }, ...]
tabs.open(config)
Open a new tab.
// Open Graph Builder
const result = await window.midas.tabs.open({
type: 'graph-builder',
title: 'My Graph',
datasetId: 'ds_001'
});
// result.data: { tabId: 'tab_...', type: 'graph-builder', title: 'My Graph' }
// Open SQL Editor
const result2 = await window.midas.tabs.open({
type: 'sql-editor',
initialQuery: 'SELECT * FROM Iris LIMIT 10',
initialOutputName: 'Preview'
});
Available tab types:
| Type | Description |
|---|---|
| graph-builder | Graph Builder |
| sql-editor | SQL Editor |
| glm | GLM |
| glmm | GLMM |
| random-forest | Random Forest |
| linear-regression | Linear Regression |
| pca | PCA |
| statistics | Descriptive Statistics |
| crosstab | Crosstab |
| anova | ANOVA |
| kaplan-meier | Kaplan-Meier |
| cox-regression | Cox Regression |
| doe-analysis | DOE Analysis |
| data-table | Data Table |
| report | Report (requires reportId) |
| computed-column | Computed Column |
| dummy-coding | Dummy Coding |
| orthogonal-polynomials | Orthogonal Polynomials |
| reshape | Reshape |
| column-type-conversion | Type Conversion |
| enum-definition | Enum Definition |
| project-overview | Project Overview |
| project-lineage | Data Lineage |
| selected-rows | Selected Rows |
| excluded-rows | Excluded Rows |
| filtered-data | Filtered Data |
| model-detail | Model Detail |
| glm-diagnostics | GLM Diagnostics |
| glm-prediction | GLM Prediction |
| sql-query-viewer | SQL Query Viewer |
| project-diff | Project Diff |
| help | Help |
tabs.close(id)
Close a tab.
await window.midas.tabs.close('tab_001');
tabs.closeOthers(keepTabId)
Close all tabs except the specified one.
const result = await window.midas.tabs.closeOthers('tab_001');
// result.data: { closedCount: 3 }
tabs.getGraphBuilder(tabId)
Get Graph Builder tab configuration.
const result = await window.midas.tabs.getGraphBuilder('tab_001');
// result.data: { tabId, graphType, datasetId, config, aspectRatio }
tabs.addGraphLayer(tabId, layer)
Add a layer to a custom graph. Only works when graphType is 'custom'.
const result = await window.midas.tabs.addGraphLayer('tab_001', {
geom: { type: 'point' },
aes: { x: 'sepal_length', y: 'sepal_width', color: 'species' }
});
// result.data: { layerIndex: 0 }
Aesthetic mappings (aes) accept column names or column IDs. Column names are resolved case-insensitively. Available properties are x, y, color, fill, stroke, size, shape, alpha, linetype, ymin, ymax, and group. Not all properties apply to every geom type — for example, Point and Line do not support fill. To use a fixed color, specify { fixedColor: '#FF0000' }. For a fixed size, use a number. When stats is omitted, identity is used by default.
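Putting those rules together, a layer object mixing column mappings with fixed values looks like this (column names here are hypothetical):

```javascript
// Illustrative layer object per the rules above: column mappings for x/y,
// a fixed color via { fixedColor }, and a fixed size via a number.
const pointLayer = {
  geom: { type: 'point' },
  aes: {
    x: 'weight',
    y: 'height',
    color: { fixedColor: '#FF0000' }, // fixed color instead of a column
    size: 3,                          // fixed size instead of a column
  },
};

// await window.midas.tabs.addGraphLayer('tab_001', pointLayer);
```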
See Custom Graph Reference for the list of geom, stat, and position types. Each Statistic's params are documented there with their TypeScript types and default values.
tabs.updateGraphLayer(tabId, layerIndex, layer)
Partially update an existing layer.
await window.midas.tabs.updateGraphLayer('tab_001', 0, {
geom: { type: 'line' }
});
tabs.removeGraphLayer(tabId, layerIndex)
Remove a layer.
await window.midas.tabs.removeGraphLayer('tab_001', 0);
tabs.moveToPane(tabId, toPaneId)
Move a tab to a different pane. Use a pane ID returned by layout.split().
await window.midas.tabs.moveToPane('tab_001', 'pane_002');
tabs.setDataset(tabId, datasetId)
Switch a tab's dataset.
await window.midas.tabs.setDataset('tab_001', 'ds_002');
tabs.configureGraph(tabId, config)
Configure a Graph Builder tab in one call. Set graph type, dataset, layers, aspect ratio, and more. Column names are resolved case-insensitively.
await window.midas.tabs.configureGraph('tab_001', {
graphType: 'custom',
datasetId: 'ds_001',
layers: [
{ geom: { type: 'point' }, aes: { x: 'weight', y: 'height', color: 'group' } }
],
aspectRatio: '4:3'
});
models
models.list()
List trained models.
const result = await window.midas.models.list();
// result.data: [{ id, type, name, datasetId, family }, ...]
models.run(config)
Run a GLM. Columns can be specified by name (case-insensitive). The result is returned directly without opening a tab. To persist the model for later use with models.list() and models.describe(), call models.save() with the returned runId. Unsaved run results are kept in memory (up to 20) and are lost on page reload.
Currently only type: 'glm' is supported.
const result = await window.midas.models.run({
type: 'glm',
datasetId: 'ds_001',
yColumn: 'sepal_length',
xColumns: ['sepal_width', 'petal_length'],
family: 'gaussian'
});
// result.data:
// {
// runId: '...',
// coefficients: [
// { variable: '(Intercept)', estimate: 2.25, se: 1.02, testStatistic: 2.21, p: 0.029, ciLower: 0.23, ciUpper: 4.27 },
// ...
// ],
// inference: { distribution: 't', df: 147 },
// fit: { deviance: 42.3, nullDeviance: 234.7, aic: 183.94, bic: 193.47, iterations: 5, converged: true },
// diagnosticSummary: { nObservations: 150, nIncomplete: 0, degreesOfFreedom: 147, dispersionParameter: 0.29 },
// warnings: []
// }
The testStatistic field in each coefficient is the Wald test statistic estimate / se. inference identifies the reference distribution used for testStatistic, p, ciLower, and ciUpper. distribution: 't' indicates a t distribution with df degrees of freedom; distribution: 'normal' indicates a standard normal distribution, in which case df is null.
The family-by-family mapping is shown below. β̂/SE follows t(n−p) exactly only for Gaussian with the identity link. Other dispersion-estimating families apply a t distribution as a small-sample convention, while families marked asymptotic use the standard normal approximation.
| family | link | Reference distribution | Nature |
|---|---|---|---|
| gaussian | identity | t(n−p) | Exact |
| gaussian | non-identity | t(n−p) | Small-sample convention |
| gamma | any | t(n−p) | Small-sample convention |
| negative-binomial | any (fixed θ) | t(n−p) | Small-sample convention |
| poisson | any | Standard normal | Asymptotic |
| binomial | any | Standard normal | Asymptotic |
| negative-binomial | any (estimated θ) | Standard normal | Asymptotic |
ciLower and ciUpper are the 95% Wald confidence interval estimate ± criticalValue × se for each coefficient. The critical value is the 0.975 quantile of the reference distribution reported in inference.
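The arithmetic is straightforward to reproduce. The sketch below uses the 0.975 normal quantile (≈1.96); for a t reference distribution you would substitute the t(df) quantile instead:

```javascript
// Illustrative arithmetic: Wald test statistic and 95% Wald CI for one
// coefficient. The default crit is the 0.975 standard normal quantile;
// pass the t(df) quantile when inference.distribution is 't'.
function waldSummary(estimate, se, crit = 1.959964) {
  return {
    testStatistic: estimate / se,
    ciLower: estimate - crit * se,
    ciUpper: estimate + crit * se,
  };
}
```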
The response may include two kinds of warnings. Top-level result.warnings contains data preparation warnings such as exclusion of rows with missing values. result.data.warnings contains model execution warnings such as convergence issues. Rows with missing values in any response or explanatory variable are excluded from analysis. The number of excluded rows is available in diagnosticSummary.nIncomplete.
Specify family as 'gaussian' (default), 'binomial', 'poisson', 'gamma', or 'negative-binomial'. Use link to set the link function. When omitted, the default link for the family is used.
| family | Default link | Available links |
|---|---|---|
| gaussian | identity | identity, log |
| binomial | logit | logit, probit |
| poisson | log | log, identity |
| gamma | inverse | inverse, log, identity |
| negative-binomial | log | log |
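When building run configs programmatically, the default-link column of the table above can be encoded as a lookup. An illustrative sketch (the API itself applies these defaults when link is omitted):

```javascript
// Illustrative mapping of each family's default link, per the table above.
const DEFAULT_LINK = {
  gaussian: 'identity',
  binomial: 'logit',
  poisson: 'log',
  gamma: 'inverse',
  'negative-binomial': 'log',
};

function resolveLink(family, link) {
  return link ?? DEFAULT_LINK[family];
}
```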
Optional parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| includeIntercept | boolean | true | Include intercept term |
| maxIterations | number | 25 | Maximum number of iterations |
| tolerance | number | 1e-8 | Convergence tolerance |
| binomialResponse | object | - | Binomial response format. See below |
| theta | number | - | Overdispersion parameter for negative binomial. When omitted, estimated via profile likelihood |
| offsetColumn | string | - | Offset column (e.g., exposure for Poisson regression) |
binomialResponse specification:
Specifies the response variable format when family: 'binomial'. When binomialResponse is omitted, binary format is assumed.
- { format: 'binary' } -- Binary 0/1 data. Specify the response variable with yColumn.
- { format: 'grouped', successesColumn: '...', trialsColumn: '...' } -- Successes/trials pair. yColumn can be omitted in this case.
// Grouped Binomial example
const result = await window.midas.models.run({
type: 'glm',
datasetId: 'ds_001',
binomialResponse: { format: 'grouped', successesColumn: 'defects', trialsColumn: 'inspected' },
xColumns: ['temperature', 'pressure'],
family: 'binomial',
link: 'logit'
});
models.save(runId, name?)
Save a model run result to the project. After saving, the model is available via models.list() and models.describe().
const run = await window.midas.models.run({ ... });
const saved = await window.midas.models.save(run.data.runId, 'My Model');
// saved.data: { modelId: '...', name: 'My Model' }
A diagnostic dataset is created on demand when you open the GLM Diagnostics tab. Key columns include fitted_values, deviance_residuals, pearson_residuals, standardized_residuals, leverage, and cooks_distance. Visualize these with reports.addGraph() or Graph Builder for residual analysis and diagnostic plots.
models.describe(id)
Get model details. Supports GLM, GLMM, and Random Forest. The response structure varies by model type (check result.data.type). GLMM and Random Forest models are created through the GUI; use describe() to retrieve their results via the API.
GLM returns coefficients, fit statistics (AIC, BIC, deviance), diagnostic summary, and metadata.
GLMM returns fixed effects (same format as GLM coefficients), random effects (group variable, variance, ICC, BLUP), fit statistics (AIC, BIC, deviance, log-likelihood), and diagnostic summary.
Random Forest returns task type (classification/regression), hyperparameters, and feature importances (if available).
const result = await window.midas.models.describe('model_001');
// GLM example - result.data:
// {
// type: 'glm',
// id: 'model_001',
// name: 'My Model',
// metadata: {
// createdAt: '2025-01-15T10:30:00Z',
// trainingDatasetId: 'ds_001',
// predictors: ['sepal_width', 'petal_length'],
// response: 'sepal_length',
// sampleSize: 150
// },
// coefficients: [
// { variable: '(Intercept)', estimate: 2.25, se: 1.02, testStatistic: 2.21, p: 0.029, ciLower: 0.23, ciUpper: 4.27 },
// { variable: 'sepal_width', estimate: 0.60, se: 0.24, testStatistic: 2.50, p: 0.014, ciLower: 0.13, ciUpper: 1.07 },
// ...
// ],
// inference: { distribution: 't', df: 147 },
// fit: { deviance: 42.3, nullDeviance: 234.7, aic: 183.94, bic: 193.47, iterations: 5, converged: true },
// diagnosticSummary: { ... }
// }
For GLMM, the coefficients under fixedEffects use the same ModelCoefficientInfo shape and inference is always { distribution: 'normal', df: null }. Random Forest does not include an inference field.
// GLMM example - result.data:
// {
// type: 'glmm',
// id: 'model_002',
// name: 'Mixed Model',
// metadata: { createdAt: '2025-01-15T10:30:00Z', trainingDatasetId: 'ds_001', predictors: ['x1'], response: 'y', sampleSize: 200 },
// fixedEffects: [
// { variable: '(Intercept)', estimate: 3.14, se: 0.85, testStatistic: 3.69, p: 0.0002, ciLower: 1.47, ciUpper: 4.81 },
// { variable: 'x1', estimate: 0.52, se: 0.18, testStatistic: 2.89, p: 0.004, ciLower: 0.17, ciUpper: 0.87 }
// ],
// inference: { distribution: 'normal', df: null },
// randomEffects: {
// groupColumn: 'school',
// variance: 1.23,
// residualVariance: 4.56,
// icc: 0.212,
// blup: [{ groupId: 'A', value: 0.45 }, { groupId: 'B', value: -0.32 }]
// },
// fit: { deviance: 892.1, aic: 902.1, bic: 915.3, logLikelihood: -447.05, iterations: 12, converged: true },
// diagnosticSummary: { nObservations: 200, nGroups: 10, nFixedEffects: 2, degreesOfFreedom: 198, nIncomplete: 0, groupSizes: { min: 15, max: 25, mean: 20 } }
// }
models.configure(tabId, config)
Configure a GLM tab. Set family, link function, response variable, and explanatory variables. Column names are resolved case-insensitively.
await window.midas.models.configure('glm_001', {
family: 'binomial',
link: 'logit',
yColumn: 'outcome',
xColumns: ['age', 'treatment'],
});
reports
reports.create(name, description?)
Create a new report.
const result = await window.midas.reports.create('Analysis Report');
// result.data: { reportId: 'report_...', name: 'Analysis Report' }
reports.list()
List reports.
const result = await window.midas.reports.list();
// result.data: [{ id, name, elementCount }, ...]
reports.getContent(reportId)
Get report content.
const result = await window.midas.reports.getContent('report_001');
// result.data: { content: '## Analysis Results\n...', elements: [{ id, type, title }, ...] }
reports.setContent(reportId, content)
Replace all text content of the report. Text added by addContent() or addModelSummary() is also replaced.
const result = await window.midas.reports.setContent('report_001', '## Updated Results\n...');
// result.data: { contentLength: 42 }
reports.addContent(reportId, markdown)
Append Markdown text to the end of a report.
await window.midas.reports.addContent('report_001', '## Analysis Results\n\nThe model shows...');
reports.addModelSummary(reportId, modelId)
Add a model summary to a report. Supports GLM, GLMM, Linear Regression, and Random Forest.
All model types use the element-reference scheme (the same as addGraph). For GLM, GLMM, and Linear Regression, the coefficient table is added as a DataTableElement under report.elements, and a {{data_table:elementId}} reference is inserted into the report content. For Random Forest, Feature Importance is added as a DataTableElement (2-column table: Feature, Importance, sorted by importance descending) when featureImportances is available. The underlying DerivedDataSet is registered in project.datasets, so it also appears in the Data tab listing.

When called multiple times for the same model, the coefficients DerivedDataSet is reused only if an existing DerivedDataSet has both the same name and an equal operation. If an existing DerivedDataSet has the same name but a different operation (e.g. after changing fit conditions), the call returns APIResult.success === false with a "Dataset with name "X" already exists" error. To record summaries for multiple fit configurations, either delete the previous model or save the new model under a different name before calling this method. Report elements (DataTableElement, ModelStatsElement, etc.) and text are added anew on each call.

Deleting a model automatically removes associated coefficient DerivedDataSets and prunes any report element that referenced the deleted datasets or model: DataTableElement, ModelStatsElement, GraphBuilderElement, CrosstabElement, CrossDatasetComparisonElement, and StatisticsSummaryElement. The modelId argument must refer to a saved model; otherwise the call returns APIResult.success === false.
For GLM, GLMM, and Linear Regression, the Model Fit / Random Effects / OLS Fit summary is registered as a ModelStatsElement (type: 'model_stats') under report.elements, and a {{model_stats:elementId}} reference is inserted into the report content. The element stores only the model id and resolves values from project.models[modelId] at render time, so when the same model id is retrained and project.models is overwritten, existing reports automatically reflect the new values. If the model is deleted after the report is created, ModelStatsElement renders a "Model not found" placeholder.
GLM and GLMM coefficient tables share the same columns: Variable, Estimate, Std. Error, Test Statistic, Distribution, DF, P-value, Lower 95%, Upper 95%. Linear Regression coefficient tables additionally include Std. Coef. and VIF. The Distribution column indicates the reference distribution used for the test statistic and p-value (t or normal), and the DF column holds the degrees of freedom when the reference distribution is t (null for normal). For GLM, the reference distribution is determined by family and link (see the table under models.run()). For GLMM fixed effects, MIDAS always uses the asymptotic standard normal distribution (normal) as an implementation choice. Linear Regression always uses the t distribution. Confidence intervals are Wald-type (estimate ± criticalValue × SE, where the critical value is the 0.975 quantile of the distribution indicated by Distribution/DF).
What each model type renders:
- GLM: ModelStatsElement renders a Model Fit section with AIC, BIC, Deviance, Null Deviance, Converged, and iterations.
- GLMM: ModelStatsElement renders a Random Effects section with Group Variable, Number of Groups, Random Intercept Variance, Residual Variance (LMM only), and ICC; and a Model Fit section with AIC, BIC, Deviance, Log-Likelihood, Converged, and iterations. For LMM (Gaussian + identity link), labels read AIC (REML), BIC (REML), REML Log-Likelihood, and ICC; for other GLMMs, they read AIC, BIC, Log-Likelihood (Laplace), and ICC (latent scale).
- Linear Regression: Five elements are registered — coefficient table, ANOVA Type I, ANOVA Type III, Prediction Intervals (per-observation prediction / confidence intervals), and a ModelStatsElement. The ModelStatsElement renders an OLS Fit section with R², Adjusted R², F-statistic, F p-value, RMSE, and N observations, plus an Information Criteria section with AIC and BIC. Converged / iterations are not shown because they are trivial for OLS.
- Random Forest: A Feature Importance DataTableElement (2-column table: Feature, Importance, sorted by importance descending) when featureImportances is available, and a ModelStatsElement rendering a Model Configuration section (Task Type, Number of Estimators, Max Depth, Min Samples Split, Min Samples Leaf, Max Features) and OOB Score.
const result = await window.midas.reports.addModelSummary('report_001', 'model_001');
// result.data: { reportId, addedText, elementId?, statsElementId?, anovaTypeIElementId?, anovaTypeIIIElementId?, predictionIntervalsElementId? }
// elementId and statsElementId are returned for GLM / GLMM / Linear Regression / Random Forest
// - elementId: id of the coefficient or Feature Importance DataTableElement (RF: only when featureImportances is non-empty)
// - statsElementId: id of the ModelStatsElement that renders Model Fit / Random Effects / OLS Fit / Model Configuration
// anovaTypeIElementId / anovaTypeIIIElementId / predictionIntervalsElementId are Linear Regression only
reports.addGraph(reportId, config)
Add a graph to a report as a report element. Creates a Custom Graph without opening a tab. Column names are resolved case-insensitively.
const result = await window.midas.reports.addGraph('report_001', {
datasetId: 'ds_001',
layers: [
{ geom: { type: 'point' }, aes: { x: 'weight', y: 'height' } }
],
title: 'Weight vs Height',
height: 500,
});
// result.data: { elementId, reportId }
The graph is stored as a report element and a {{graph_builder:elementId}} reference is appended to the report content. The height parameter sets the graph height in pixels and defaults to 400. Minimum value is 200, maximum is 5000.
aspectRatio accepts '16:9', '4:3', '1:1', '3:4', '9:16', or 'custom'. When a preset value other than 'custom' is specified, the aspect ratio determines the displayed height and height is not used for rendering. When 'custom' is specified, height sets the height.
layout
layout.split(config)
Split a pane to create a new area.
const result = await window.midas.layout.split({
tabId: 'tab_001',
direction: 'horizontal' // 'horizontal' or 'vertical'
});
// result.data: { newPaneId: 'pane_...', originalPaneId: 'pane_...' }
Use the returned newPaneId with tabs.moveToPane() to place tabs in the new pane.
Error Codes
| Code | Description |
|---|---|
| ERROR | General error |
| NO_PROJECT | No project is loaded |
| NOT_FOUND | Specified resource not found |
| DATASET_NOT_FOUND | No dataset matching the table name |
| COLUMN_NOT_FOUND | Column not found |
| INVALID_TAB_TYPE | Invalid tab type |
| INVALID_TAB_TYPE_FOR_OPERATION | Tab type does not support this operation |
| INVALID_GRAPH_TYPE | Layer operation attempted on non-custom graph |
| INVALID_INPUT | Invalid input parameter |
| INDEX_OUT_OF_RANGE | Layer index out of range |
| DATASET_ALREADY_EXISTS | Dataset with the same name already exists (when overwrite is false) |
| SELF_REFERENCE | The dataset being overwritten is a dependency (ancestor) of the operation |
| NAME_CONFLICT | Output name of a derived method collides with an existing primary dataset |
| OPERATION_TYPE_MISMATCH | The existing derived dataset was created by a different method (operation type) |
| AMBIGUOUS_TABLE_NAME | Multiple datasets match the table name case-insensitively |
| EXECUTION_ERROR | SQL execution error |
| UNSUPPORTED_MODEL_TYPE | Unsupported model type |
| MODEL_EXECUTION_ERROR | Model execution error |
| NUMERICAL_ERROR | Numerical computation error (e.g., matrix singularity) |
| INSUFFICIENT_DATA | Insufficient valid observations |
| NO_DATA | Dataset has no data loaded |
| NO_CONTAINER | No active container |
| NO_TARGET | No table reference in SQL |
| NO_CONFIG | No Graph Builder configuration |
| SPLIT_FAILED | Pane split failed |
| SANDBOX_MODE | Cannot save in sandbox mode |
| ENUM_ALREADY_EXISTS | Enum definition already exists |
| ENUM_NOT_FOUND | Enum definition not found |
| ENUM_IN_USE | Enum definition is referenced by columns |
Reference
- Live reference: Run window.midas.help() in the project screen