Managing the dashboard data server and display server
Use of a deployed dashboard depends on a running data server or display server. You start, stop, and manage these servers with the following executables, which are found in the bin directory of your Apama installation:
dashboard_server
display_server
dashboard_management
Prerequisites
In order to start a data server or display server with all the necessary parameters to support a given deployment, you may need to obtain the following information from the Dashboard Builder:
The logical name for each correlator as well as the host name and port for each deployment correlator (if any) that was specified by the Dashboard Builder in the Apama tab of the Tools > Options dialog prior to the generation of the deployment package. See Changing correlator definitions for deployment.
The port of the Apama dashboard data server must be accessible to the Apama Dashboard Viewer. If you are on a Windows system and the firewall is enabled, unblock network access for this port. The default value for the port is 3278. For security reasons, never change firewall settings such that this port is exposed to untrusted clients.
Command-line options for the data server and display server
The executable for the data server is dashboard_server and the executable for the display server is display_server. They can be found in the bin directory of the Apama installation.
Synopsis
To start the data server, run the following command:
dashboard_server [ options ]
To start the display server, run the following command:
display_server [ options ]
When you run these commands with the -h option, the usage message for the corresponding command is shown.
Start the display server from the dashboards directory of your Apama work directory. Alternatively, you can start the display server with the --dashboardDir folder option to specify a folder that contains your deployed dashboards. When --dashboardDir folder is used, the display server looks for the deployed dashboards in the specified folder.
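For example, the following command starts the display server against a deployment folder (the path is a placeholder; substitute the location of your own deployed dashboards):
display_server --dashboardDir C:\deployments\dashboards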
Description
The dashboard_server and display_server executables can be run without arguments, in which case they start a server on port 3278 (for a data server) or 3279 (for a display server) on the local host. You can specify a different port with the -d or --dataPort option.
You can enable logging with the -f and -v (or --logfile and --loglevel) options or with the log4j properties file.
Options
Both the dashboard_server and display_server executables take the following options:
Option
Description
-A | --sendAllData
Dashboard data server only. Send all data over the socket regardless of whether or not it has been updated.
-a bool | --authUsers bool
Specifies whether to enable user authentication. bool is one of true and false. By default, authentication is enabled. Set --authUsers to false for web deployments for which authentication is performed by the web layer.
Sets the correlator host and port for a specified logical correlator name. raw-channel is one of true and false, and specifies whether to use the raw channel for communication. This overrides the host, port, and raw-channel setting specified by the Dashboard Builder for the given correlator logical name; see Changing correlator definitions for deployment. This option can occur multiple times in a single command. For example:
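(A sketch; this assumes the option is written as --correlator name:host:port[:raw-channel], and the host names and ports are placeholders.)
--correlator default:localhost:15903 --correlator work1:prodhost1:15903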
These options set the host and port for the logical names default and work1.
-d port | --dataPort port
Data server or display server port to which viewers (for local deployments) or the data servlet (for web deployments) will connect in order to receive event data. If not specified, the default port (3278 for data servers and 3279 for display servers) is used.
-E bool | --purgeOnEdit bool
Specifies whether to purge all trend data when an instance is edited. bool is one of true and false. If this option is not specified, all trend data is purged when an instance is edited. In most cases this is the desired mode of operation.
-F arg | --filterInstance arg
Exclude instances that are not owned by the user. This option applies to all dashboard processes. The default is false for the Builder and true for the other dashboard processes. Exception: when the Dashboard Viewer is connecting to a dashboard server, the default is true and cannot be overridden.
-f file | --logfile file
Full pathname of the file in which to record logging. If this option is not specified, the options in the log4j properties file are used. See also Text internationalization in the logs.
-G file | --trendConfigFile file
Trend configuration file for controlling trend-data caching.
-h | --help
Emit usage information and then exit.
-J file | --jaasFile file
Full pathname of the JAAS initialization file to be used by the data server or display server. If not specified, the server uses the file JAAS.ini in the lib directory of your Apama installation.
-L file | --xmlSource file
XML data source file. If file contains static data, append :0 to the file name. This signals Apama to read the file only once.
-m mode | --connectMode mode
Correlator-connect mode. mode is one of always and asNeeded. If always is specified all correlators are connected to at startup. If asNeeded is specified, the data server or display server connects to correlators as needed. If this option is not specified, the server connects to correlators as needed.
--namedServer name:host:port
Sets the host and port for a specified logical data server name. This overrides the host and port specified by the Dashboard Builder for the given server logical name. This option can occur multiple times in a single command. See Working with multiple data servers for more information.
-O file | --optionsFile file
Full pathname of the OPTIONS.ini file.
-P n | --maxPrecision n
Maximum number of decimal places to use in numerical values displayed by dashboards. Specify values between 0 and 10, or -1 to disable truncation of decimal places. A typical value for n is 2 or 4, which eliminates long floating point values (for example, 2.2584435234). Truncation is disabled by default.
-p port | --port port
Port on which this data server or display server will listen for management operations. This is the port used for communication between the server and the dashboard_management process.
-Q size | --queueLimit size
Set the server output queue size to size. This changes the default queue size for each client that is connected to the server.
-q options | --sql options
Configures SQL Data Source access. options can include the following:
retry: Specify the interval (in milliseconds) to retry connecting to a database after an attempt to connect fails. Default is -1, which disables this feature.
fail: Specify the number of consecutive failed SQL queries after which to close this database connection and attempt to reconnect. Default is -1, which disables this feature.
db: Specify the logical name of the database as specified in the builder’s SQL options.
noinfo: Do not query the database for available tables and columns. If a Database Repository file is found, it is used to populate drop-down menus in the Attach to SQL Data dialog.
nopererr: SQL errors with the word “permission” in them are not printed to the console. This is helpful if you have selected the Use Client Credentials option for a database. In that case, if a user's login does not allow access to some of the data in their display, they will not see any errors.
quote: Encloses all table and column names specified in the Attach to SQL Data dialog in quotes when an SQL query is run. This is useful when attaching to databases that support quoted case-sensitive table and column names. If a case-sensitive table or column name is used in the Filter field, or you are entering an advanced query in the SQL Query field, it must be entered in quotes, even if the -sqlquote option is specified.
-R bool | --purgeOnRemove bool
Specifies whether to purge all instance data when an instance is removed. bool is one of true and false. If this option is not specified, all instance data is purged when an instance is removed.
-r bool | --cacheUsers bool
Specifies whether to cache and reuse user authorization information. bool is one of true and false. Specifying true can improve performance, because users are authorized only once (per data server or display server session) for a particular type of access to a particular instance.
-s | --ssl
Dashboard data server only. Enable secure sockets for client communication. When secure sockets are enabled, the data server will encrypt data transmitted to Dashboard Viewers. Encryption is done using the strongest cipher available to both the data server and Viewer. SSL certificates are not supported. The display server does not support this option.
-T depth | --maxTrend depth
Maximum depth for trend data, that is, the maximum number of events in trend tables. If this option is not specified, the maximum trend depth is 1000. Note that the higher you set this value, the more memory the data server or display server requires, and the more time it requires in order to display trend and stock charts.
-t bool | --cacheAuthorizations bool
Cache and reuse instance authorizations. Caching authorizations is enabled by default. When caching is enabled, authorization checks are performed only once per user for each DataView they access. Disabling caching allows the current state of the DataView to be used in the authorization check, but can degrade performance.
-u rate | --updateRate rate
Data update rate in milliseconds. This is the rate at which the data server or display server pushes new data to deployed dashboards in order to inform them of new events received from the correlator. rate should be no lower than 250. If the Dashboard Viewer is utilizing too much CPU, you can lower the update rate by specifying a higher value. If this option is not specified, an update rate of 500 milliseconds is used.
-V | --version
Emit program name and version number and then exit.
-v level | --loglevel level
Logging verbosity. level is one of FATAL, ERROR, WARN, INFO, DEBUG, and TRACE. If this option is not specified, the options in the log4j properties file will be used.
-X file | --extensionFile file
Full pathname of the extensions initialization file to be used by the data server or display server. If not specified, the server uses the file EXTENSIONS.ini in the lib directory of your Apama installation.
--queryIndex table-name:key-list
Add an index for the specified SQL-based instance table with the specified compound key. table-name is the name of a DataView. key-list is a comma-separated list of variable names or field names. If the specified DataView exists in multiple correlators that are connected to the dashboard server, the index is added to each corresponding data table. Example:
--queryIndex Products_Table:prod_id,vend_id
You can only add one index per table, but you can specify this option multiple times in a single command line in order to index multiple tables.
Default time zone for interpreting and displaying dates. zone is either a Java time zone ID or a custom ID such as GMT-8:00. Unrecognized IDs are treated as GMT. See Time zone ID values for the complete listing of permissible values for zone.
--inclusionFilter value
Set instance inclusion filters. Use this option to control scenario (for example, DataView) discovery. If not specified, all scenarios that have output fields will be discovered and kept in the memory of the dashboard processes, which can be expensive. For example, to include only the DV_Weather DataView, specify --inclusionFilter DV_Weather. The value can be a comma-separated list of scenario IDs. If you specify an inclusion filter, any specified exclusion filters are ignored.
--exclusionFilter value
Set instance exclusion filters. Use this option to exclude specific scenarios (for example, DataViews) from being kept in the memory of the dashboard processes. If neither exclusion filters nor inclusion filters are specified, all scenarios that have output fields will be discovered and kept in the memory of the dashboard processes, which can be expensive. The value can be a comma-separated list of scenario IDs. If an inclusion filter is specified, any exclusion filters are ignored.
--dashboardDir folder
Set the directory that display_server uses to look for the deployed dashboards. If not specified, display_server must be started from the %APAMA_WORK%\dashboards directory in order for it to locate the deployed dashboards.
--dashboardExtraJars jarFiles
A semicolon-separated list of .jar files for custom functions, custom commands, or any other third-party .jar files (for example, a JDBC driver). If not specified, the environment variable APAMA_DASHBOARD_CLASSPATH must be defined before running the dashboard processes. Each entry in jarFiles can be an absolute path to the .jar file or, when the --dashboardDir option is used, a path relative to its folder argument.
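For example, the following command combines both options (the folder and .jar file names are placeholders; the first .jar entry is relative to the --dashboardDir folder, the second is absolute):
display_server --dashboardDir C:\deployments\dashboards --dashboardExtraJars "lib\custom-functions.jar;C:\drivers\sqljdbc.jar"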
--namedServerMode
Dashboard data server only. Specify this option when you start a data server that is used as a named server by a display-server deployment. See Working with multiple data servers for more information.
--pidfile file
Specifies the name of the file that contains the process ID. This file is created at process startup and can be used, for example, to externally monitor or terminate the process. It is recommended that the file name include the port number to distinguish different servers, for example, dashboard_server-3278.pid for a data server and display_server-3279.pid for a display server.
Rotating the log files of the data server and display server
Rotating a log file refers to closing a log file being used by a running data server or display server and opening a new log file to be used instead from that point onwards. This lets you archive log files and avoid log files that are too large to easily view.
Each site should decide on and implement its own log rotation policy. You should consider the following:
How often to rotate log files.
How large a log file for a data server or display server can be.
What log file naming conventions to use to organize log files.
There is a lot of useful header information in the log file being used when the data server or display server starts. If you need to provide log files to Apama technical support, you should be able to provide the log file that was in use when the data server or display server started, as well as any other log files that were in use before and when a problem occurred.
Info
Regularly rotating log files and storing the old ones in a secure location may be important as part of your personal data protection policy. For more information, see Protecting and erasing data from Apama log files.
Logging for data servers and display servers is configured using standard Log4j configuration files. You can find them in the etc directory of your Apama installation:
log4j-dashboard-server.xml is the configuration file for the data server.
log4j-display-server.xml is the configuration file for the display server.
By default, the above files configure the servers to rotate the log files when they reach a certain file size. If you want to enable time-based rotation instead (for example, to rotate the log files on a monthly basis), see the Log4j 2 documentation at https://logging.apache.org/log4j/2.x/.
Many external resources are available online that describe how to configure Log4j for different purposes. For more advanced configurations, consider consulting these.
Info
Some people use the term “log rolling” instead of “log rotation”.
Controlling the update frequency
The correlator sends update events to the data server, display server, or any clients using the Scenario Service API (see also Scenario Service API) for all scenarios with output variables (for example, DataViews, including MemoryStore tables that expose DataViews using the exposeMemoryView or exposePersistentView schema options). These updates are sent whenever the values of fields or output variables in your scenarios change. If you have scenarios that update frequently, you might need to reduce the frequency of update events sent by the correlator.
You can adjust the settings per scenario definition or globally. The global value is used where a given scenario definition has no specific setting. The per-definition values always take precedence over the global values.
A ConfigureUpdates event consists of the following:
scenarioId (type string): May be the empty string to modify the global values, or a definition’s scenarioId.
Configuration (type dictionary (string, string)): Configuration keys and values. The key can be one of the following:
sendThrottled (type boolean): Whether to send throttled updates (on the scenarioId.Data channel). The default is true.
sendRaw (type boolean): Whether to send every update (on the scenarioId.Data.Raw channel). The default is true.
throttlePeriod (type float): Throttle period in seconds. A zero value indicates no throttling. The default is 0.0.
routeUpdates (type boolean): Whether to route Update events for the benefit of EPL files running in the correlator. The default is false.
sendThrottledUser (type boolean): Whether to send throttled updates on a per-user channel. The default is false.
sendRawUser (type boolean): Whether to send raw updates on a per-user channel. The default is false.
The keys with a User suffix are suitable for use only with custom clients that use ScenarioServiceConfig.setUsernameFilter() on their service configuration.
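For example, the following ConfigureUpdates events could be sent to the correlator (a sketch based on the field descriptions above; the package name is assumed to match that of SetThrottlingPeriod, and the scenario IDs are the ones used in the explanation below):
com.apama.scenario.ConfigureUpdates("DV_scenario1", {"sendRaw":"true"})
com.apama.scenario.ConfigureUpdates("DV_scenario2", {"sendRaw":"true"})
com.apama.scenario.ConfigureUpdates("DV_scenario3", {"throttlePeriod":"1.0"})
com.apama.scenario.ConfigureUpdates("", {"sendRaw":"false", "throttlePeriod":"0.1"})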
The above examples configure DV_scenario1 and DV_scenario2 to send raw updates; DV_scenario3 to use a throttle period of 1 second; and all other scenarios to not send raw updates, and to use a throttle period of 0.1 seconds.
Earlier releases used the com.apama.scenario.SetThrottlingPeriod(x) event. Note that the use of the ConfigureUpdates events allows greater flexibility than the SetThrottlingPeriod event (which only controlled sending of throttled updates for all scenarios).
The use of com.apama.scenario.SetThrottlingPeriod(x) should be replaced with:
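(A minimal sketch; x stands for the throttle period in seconds that was previously passed to SetThrottlingPeriod.)
com.apama.scenario.ConfigureUpdates("", {"throttlePeriod":"x"})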
Note that by default, routeUpdates is false, so any EPL that relies on Update events (and other scenario control events) being routed should route a ConfigureUpdates event, with routeUpdates set to true, for each scenarioId it is interested in.
The latest values are always used. It is therefore not advisable for a client to send an event requesting (for example) raw updates and then undo this when it disconnects, as that would affect other clients. The recommendation is that the administrator configures these settings at initialization time.
Runtime performance of scenarios can be improved by setting sendRaw and routeUpdates to false and throttlePeriod to a non-zero value. In this case, the cost of an update is reduced, because Update events are only generated when needed and, with throttling, they are needed at most once every throttlePeriod.
Configuring Trend-Data Caching
By default, dashboard servers (data servers and display servers) collect trend data for all numeric output variables of DataViews running in their associated correlators. This data is cached in preparation for the possibility that it will be displayed as historical data in a trend chart when a dashboard starts up. Without the cache, trend charts would initially be empty, with new data points displaying as time elapses.
Advanced users can override the default caching behavior on a given server, and control caching in order to reduce memory consumption on that server, or in order to cache variables that are not cached by default, such as non-numeric variables.
Important:
In many cases, server performance can be improved by overriding the default caching behavior and suppressing the caching of those output variables for which trend-chart historical data is not required.
Caching trend data for string variables is very costly in terms of memory consumption.
You control caching with a trend configuration file, which allows you to specify the following:
Individual variables to cache
Classes of variables to cache
Default caching rules
Trend depths (number of data points to maintain) for each DataView
You do not need to provide a trend configuration file. If you provide no trend configuration file, dashboard servers use the default caching behavior described above.
Trend charts can include variables whose trend data is not cached, but they will display no historical (pre-dashboard-startup) data for those variables.
When a data server or display server starts, it uses the trend configuration file specified with the -G option, if supplied. Otherwise it uses the file trend.xml in the dashboards directory of your Apama work directory, if there is one. (Note that Apama provides an example trend configuration file, APAMA_HOME\etc\dashboard_onDemandTrend.xml, that you can copy to APAMA_WORK\dashboards\trend.xml as a basis for a trend configuration file.) Otherwise, it uses the default caching behavior described above.
For example, one trend configuration file might specify the following caching behavior (see the sketch after this list):
For DV_scenario1 in all correlators, cache trend data for variables A and B with a maximum trend depth of 5000.
For all other DataViews, cache all numeric output variables with a maximum trend depth of 10,000.
Another configuration might specify the following:
For DV_dataview1 in correlator production, cache variables A and B with a maximum trend depth of 5000.
For all other DataViews, cache no trend data.
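Here is a sketch of a trend configuration file for the first example, based on the element and attribute descriptions that follow (compare with the example file APAMA_HOME\etc\dashboard_onDemandTrend.xml for the exact syntax):
<config>
  <trend>
    <item type="DATAVIEW" correlator="*" name="DV_scenario1" vars="LIST" depth="5000">
      <var name="A"/>
      <var name="B"/>
    </item>
    <item type="DATAVIEW" correlator="*" name="*" vars="ALL_NUMERIC_OUTPUT" depth="10000"/>
  </trend>
</config>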
In general, a trend configuration file is an XML file that consists of one or more item elements with the following attributes:
type: Must be DATAVIEW.
correlator: Logical name of the correlator. Use * if the item applies to all correlators.
name: DataView ID. Use * if the item applies to all DataViews.
vars: Class of variables to cache trend data for. Specify one of the following:
LIST: Cache the individual variables that are listed in var sub-elements.
ALL: Cache all input and output variables.
ALL_OUTPUT: Cache all output variables.
ALL_NUMERIC_OUTPUT: Cache all numeric output variables.
depth: Maximum depth of trend data to cache.
If the vars attribute of an item element is LIST, the element has zero or more var sub-elements. Each var element has a single attribute, name, which specifies the name of a DataView field.
The item elements are nested in a trend element, which is nested within a config element.
If a particular DataView on a given correlator matches multiple item elements in a server’s trend configuration file, the server chooses the best-matching item and caches the variables specified in that item. Following are the ways, in order from best to worst, in which an item can match a DataView on a given correlator:
Fully resolved: Exact match for both correlator name and DataView name
Wildcard correlator: Wildcard correlator and exact match for DataView name
Wildcard DataView: Exact match for correlator name and wildcard DataView
Fully wildcarded: Wildcard correlator and wildcard DataView
If there are multiple best matches, the last match is used.
Consider, for example, scenarios named DV_scenario1 and DV_scenario2, correlators named production and development, and the following item elements:
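(The item elements below are an illustrative sketch; the vars and depth values are arbitrary.)
<item type="DATAVIEW" correlator="production" name="DV_scenario1" vars="ALL_OUTPUT" depth="5000"/>
<item type="DATAVIEW" correlator="*" name="DV_scenario1" vars="ALL_NUMERIC_OUTPUT" depth="10000"/>
<item type="DATAVIEW" correlator="*" name="*" vars="ALL_NUMERIC_OUTPUT" depth="2000"/>
In this case, DV_scenario1 on the production correlator is fully resolved by the first item, so that item's settings are used. DV_scenario1 on the development correlator best matches the second item (wildcard correlator). DV_scenario2 on either correlator matches only the fully wildcarded third item.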
Notification of dashboard connect and disconnect
Whenever a dashboard connects to or disconnects from a data server or display server, the server sends a special notification event to all connected correlators that include the Dashboard Support bundle. These notification events, DashboardClientConnected and DashboardClientDisconnected, include the following fields:
userName specifies the user name with which the dashboard was logged in to the server.
sessionId is a unique identifier for the dashboard’s session with the server.
extraParams may be used in a future release.
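A sketch of the event definitions, inferred from the field descriptions above (the exact package, field names, and types should be confirmed against the Dashboard Support bundle; the extraParams type in particular is an assumption):
event DashboardClientConnected {
    string userName;       // user name the dashboard logged in with
    string sessionId;      // unique identifier for the dashboard's session
    dictionary<string, string> extraParams;  // reserved for future use (type assumed)
}
event DashboardClientDisconnected {
    string userName;
    string sessionId;
    dictionary<string, string> extraParams;
}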
Note that the circumstances under which a dashboard disconnects from a server include but are not limited to the following:
End user exits the Dashboard Viewer or web browser in which a dashboard is loaded.
End user exits a web browser tab in which a dashboard is loaded.
Network failure causes loss of connectivity to viewer or web browser in which a dashboard is loaded.
Note also that disconnect notification might be sent only after a timeout period rather than immediately upon loss of connection.
Follow these steps to manage connect and disconnect notification:
Ensure that the Dashboard Support bundle is loaded into all relevant correlators.
Use EPL to process DashboardClientConnected and DashboardClientDisconnected events. Base processing on the values of the userName and sessionId fields.
Working with multiple data servers
Deployed dashboards have a unique associated default data server or display server. For web-based deployments, this default is specified with the following properties of the dashboard_deploy.xml configuration file (see Generating a deployment package from the command line).
apama.displayserver.port
apama.displayserver.host
apama.displayserver.refresh
apama.displayserver.hiddenmenuitems
For Viewer deployments, it is specified upon Viewer startup. By default, the data-handling involved in attachments and commands is handled by the default server, but advanced users can associate non-default data servers with specific attachments and commands. This provides additional scalability by allowing loads to be distributed among multiple servers. This is particularly useful for display server deployments. By deploying one or more data servers behind a display server, the labor of display building can be separated from the labor of data handling. The display server can be dedicated to building displays, while the overhead of data handling is offloaded to data servers.
Apama supports multiserver configurations for Builder, Viewer, and display server deployments, as described in the sections below.
The Attach to Apama and Define… Command dialogs (except Define System Command) include a Data Server field that can be set to a data server’s logical name. To associate a logical name with the data server at a given host and port, developers use the Data Server tab in the General tab group of the Application Options dialog (select Tools > Options in the Builder).
The logical data server names specified in the Builder’s Application Options dialog are recorded in the file OPTIONS.ini, and the deployment wizard incorporates this information into deployments. You can override these logical name definitions with the --namedServer name:host:port option to the Builder, Viewer, data server or display server executable. Below is an example. This is a sequence of command-line options which should appear on a single line as part of the command to start the executable:
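(A sketch; the host names and ports are placeholders, 3278 being the default data server port.)
--namedServer Server1:host1:3278 --namedServer Server2:host2:3278 --namedServer Server3:host3:3278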
Here Server1, Server2 and Server3 are the server logical names.
Builder with multiple data servers
Builder maintains connections with the data servers named in attachments and commands. Note that it connects directly to the correlator in order to populate dialogs with metadata; correlator event data is handled by the data servers.
You can override the logical server names specified in the Application Options dialog with the --namedServer name:host:port option to the Builder executable, specifying a sequence of such options on a single line as part of the command that starts the executable, as in the example shown above (where Server1, Server2 and Server3 are the server logical names).
Viewer with multiple data servers
Viewer maintains connections with the data servers named in attachments and commands of opened dashboards.
In the data server Login dialog (which appears upon Viewer startup), end users enter the host and port of the default data server (or accept the default field values). If all attachments and commands use named data servers, end users can check the Only using named data server connections check box and omit specification of a default server.
The logical data server names specified in the Builder’s Application Options dialog are recorded in the file OPTIONS.ini, which is found in the deployed .war file along with dashboard .rtv files. You can override these logical name definitions with the --namedServer name:host:port option to the Viewer executable, specifying a sequence of such options on a single line as part of the command that starts the executable, as in the example shown above (where Server1, Server2 and Server3 are the server logical names).
Display server deployments with multiple data servers
The display server maintains connections with the data servers named in attachments and commands of its client dashboards.
Important:
In a display server deployment, each named data server must be started with the --namedServerMode option.
The logical data server names specified in the Builder’s Application Options dialog are recorded in the file OPTIONS.ini, which is used by the Deployment Wizard to define deployment logical names. You can override these logical name definitions with the --namedServer name:host:port option to the display server executable, specifying a sequence of such options on a single line as part of the command that starts the executable, as in the example shown above (where Server1, Server2 and Server3 are the server logical names).
Managing and stopping the data server and display server
The dashboard_management tool is used to stop a data server or display server and perform certain data server or display server management operations. The executable for this tool is located in the bin directory of the Apama installation. Running the tool in the Apama Command Prompt or using the apama_env wrapper (see Setting up the environment using the Apama Command Prompt) ensures that the environment variables are set correctly.
To manage and stop the data server or display server, run the following command:
dashboard_management [ options ]
When you run this command with the -h option, the usage message for this command is shown.
Description
You can use this tool to shut down, deep ping, or get the process ID of a data server or display server on a specified host and port. A successful deep ping verifies that the server is responding to requests. You can also use this tool to generate a dashboard deployment package, and to sign .jar files as part of deployment-package generation.
When you invoke this tool, you can specify the host and port of the server you want to manage. For the port, specify the port that was specified with the -p or --port option when the desired server was started. If that option was not specified when the server was started, you do not need to supply a port here; the default management port (28888) is used.
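For example, the following commands (the host, port, and shutdown reason are placeholders) deep-ping a data server that listens for management operations on port 28888 and then shut it down:
dashboard_management -n localhost -p 28888 -d
dashboard_management -n localhost -p 28888 -s "planned maintenance"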
Options
The dashboard_management tool takes the following options:
Option
Description
-a alias | --alias alias
Use the alias in order to sign the .jar files to be included in the deployment package specified by the -y or --deploy option and the -c or --config option. Specify the keystore and password with the -k or --keystoreFile option and the -o or --password option.
-c path | --config path
Generate a deployment package for the named configuration. Specify the file that defines the configuration with the -y or --deploy option. Specify the .rtv files to use with the -r or --rtvPath option.
-D | --displayServer
Run against display server.
-d | --deepping
Deep-ping the component.
-e password | --encryptString password
Generate an encrypted version of the password. This is useful when you manually add an SQL data source by entering information directly into OPTIONS.ini.
-f file | --logfile file
Full pathname of the file in which to record logging.
-h | --help
Display usage information.
-I | --invalidateAll
Invalidate all user authentications.
-i username | --invalidateUser username
Invalidate a user authentication.
-j jarfile | --jar jarfile
Name of a third-party jar file to sign. You can specify this option multiple times if you have multiple jar files to sign.
-k path | --keystoreFile path
Use the keystore file designated by path in order to sign the .jar files to be included in the deployment package specified by the -y or --deploy option and the -c or --config option. Specify the alias and password with -a or --alias option and the -o or --password option. Ensure that the environment variable JAVA_HOME is set to a Java Development Kit (JDK).
-l path | --deployLocation path
The deploy destination.
-n host | --hostname host
Connect to component on host. If not specified, localhost is used.
-o password | --password password
Use the specified password in order to sign the .jar files to be included in the deployment package specified by the -y or --deploy option and the -c or --config option. Specify the keystore and alias with the -k or --keystoreFile option and the -a or --alias option.
-p port | --port port
Connect to component on port. Specify the port that was specified with the -p or --port option when the component was started. If the -p or --port option was not specified, you do not need to supply this option. It defaults to the default management port (28888).
-r path | --rtvPath path
Generate a deployment package with the .rtv files in the directory designated by path. Specify the deployment configuration to use with the -y or --deploy option and the -c or --config option.
-s reason | --shutdown reason
Shut down the component with the specified reason.
-U path | --update path
Update the specified Release 2.4 .rtv file or files so that they are appropriate for use with this Apama release. path is the pathname of a file or directory. If path specifies a directory, all .rtv files in the directory are updated.
-v | --verbose
Emit verbose output, including the startup settings (such as dataPort and updateRate) of the dashboard server to connect to.
-V | --version
Display program name and version number and then exit.
-W | --waitFor
Wait for component to be available.
-y path | --deploy path
Generate a deployment package for a configuration defined in the dashboard configuration file designated by path. Specify the configuration name with the -c or --config option. Specify the .rtv files to use with the -r or --rtvPath option.