Sink types
Note: Types are plain Scala classes that map directly to JSON. For example, for the class `case class Buzz(x: Int, y: Map[String, String])`, a valid JSON representation could be `{ "x": 120, "y": {"a": "1"} }`. `Option` types represent optional parameters.
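To make the mapping concrete, here is a minimal sketch of how a case class with an `Option` field corresponds to JSON with an optional key. The `toJson` function and the `note` field are hypothetical, for illustration only; the real serialization is handled by the framework:

```scala
// Hypothetical illustration: an Option field maps to an optional JSON key.
case class Buzz(x: Int, y: Map[String, String], note: Option[String] = None)

// Minimal hand-rolled JSON rendering, for illustration only
// (the actual serializer is library-provided, not this function).
def toJson(b: Buzz): String = {
  val yJson = b.y.map { case (k, v) => s""""$k": "$v"""" }.mkString("{", ", ", "}")
  // An empty Option simply omits the key from the JSON object.
  val noteJson = b.note.map(n => s""", "note": "$n"""").getOrElse("")
  s"""{"x": ${b.x}, "y": $yJson$noteJson}"""
}

println(toJson(Buzz(120, Map("a" -> "1"))))             // note omitted
println(toJson(Buzz(120, Map("a" -> "1"), Some("hi")))) // note present
```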
Generic parameters:
`rowSchema` (JSON object) is the row schema for the sink (keys are incident fields, values are table columns / JSON object keys). All fields are required and have string type unless specified otherwise.
`fromTsField` - the column name for the starting timestamp of an incident
`toTsField` - the column name for the ending timestamp of an incident
`unitIdField` - the column name for the unit ID of an event (the value is taken from the source)
`appIdFieldVal` (string and integer) - the column name and value for the incident type (for example, to distinguish different incident categories)
`patternIdField` - the column name for the ID of the matched pattern (rule)
`subunitIdField` - the column name for the subunit ID of an event
`incidentIdField` - the column name for the unique ID of an incident
Example
{
  "toTsField": "to_ts",
  "fromTsField": "from_ts",
  "unitIdField": "engine_id",
  "appIdFieldVal": ["rule_type", 1],
  "patternIdField": "rule_id",
  "subunitIdField": "physical_id",
  "incidentIdField": "uuid"
}
JDBC sink
`jdbcUrl`, `driverName`, `userName`, `password` - the same as in the JDBC source
`tableName` (string) is the name of the SQL table to store incidents
`batchInterval` (optional, integer) is the size of a single batch to write (in rows)
Example
(the `rowSchema` object is omitted here; see the example above)
{
  "jdbcUrl": "jdbc:clickhouse://default:@127.0.0.1:8123/mydb",
  "tableName": "engine_events",
  "driverName": "com.clickhouse.jdbc.ClickHouseDriver"
}
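For completeness, here is a sketch of a JDBC sink configuration that also sets the optional parameters described above. The `userName`, `password`, and `batchInterval` values are illustrative assumptions, not defaults:

```json
{
  "jdbcUrl": "jdbc:clickhouse://default:@127.0.0.1:8123/mydb",
  "tableName": "engine_events",
  "driverName": "com.clickhouse.jdbc.ClickHouseDriver",
  "userName": "default",
  "password": "",
  "batchInterval": 5000
}
```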
Kafka sink
`broker` and `topic` - the same as in the Kafka source (note: for historical reasons, the sink uses `broker` in the singular)
`serializer` (string, default `"json"`) is the serializer for incident messages
Example
(the `rowSchema` object is omitted here; see the example above)
{
  "broker": "10.83.0.3:9092",
  "topic": "engine_events"
}
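Since `serializer` defaults to `"json"`, it can be omitted as above; a sketch of the same configuration with the default written out explicitly:

```json
{
  "broker": "10.83.0.3:9092",
  "topic": "engine_events",
  "serializer": "json"
}
```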