Getting Started¶

A brand new Heka installation is something of a blank canvas, full of promise but not actually interesting on its own. One of the challenges with a highly flexible tool like Heka is that newcomers can easily become overwhelmed by the wide assortment of features and options, making it difficult to understand exactly how to begin. This document will try to address this issue by taking readers through the process of configuring a hekad installation that demonstrates a number of Heka’s common use cases, hopefully providing enough context that users will be able to then adjust and extend the given examples to meet their own particular needs.

When we’re done our configuration will have Heka performing the following tasks:

  • Accepting data from a statsd client over UDP.
  • Forwarding aggregated statsd data on to both a Graphite Carbon server and an InfluxDB server.
  • Generating a real time graph of a specific set of statsd statistics.
  • Loading and parsing a rotating stream of nginx access log files.
  • Generating JSON structures representing each request loaded from the Nginx log files and sending them on to an ElasticSearch database cluster.
  • Generating a real time graph of the HTTP response status codes of the requests that were recorded in the nginx access logs.
  • Performing basic algorithmic anomaly detection on HTTP status code data, sending notification messages via email when such events occur.

But before we dig in to that, let’s make sure everything is working by trying out a very simple setup.

Simplest Heka Config¶

One of the simplest Heka configurations possible is one that loads a single file from the local file system and then outputs the contents of that file to stdout. The following is an example of such a configuration:
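
A minimal sketch of such a configuration (the paths below assume the /var/log/auth.log example discussed next; point log_directory and file_match at any file you like):

    [LogstreamerInput]
    log_directory = "/var/log"
    file_match = 'auth\.log'

    [PayloadEncoder]

    [LogOutput]
    message_matcher = "TRUE"
    encoder = "PayloadEncoder"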

Heka is configured via one or more TOML format configuration files, each of which is comprised of one or more sections. The configuration above consists of three sections, the first of which specifies a LogstreamerInput, Heka’s primary mechanism for loading files from the local file system. This one is loading /var/log/auth.log, but you can change this to load any other file by editing the log_directory setting to point to the folder where the file lives and the file_match setting to a regular expression that uniquely matches the filename. Note the single quotes (‘auth\.log’) around the regular expression; this is TOML’s way of specifying a raw string, which means we don’t need to escape the regular expression’s backslashes like we would with a regular string enclosed by double quotes (“auth\\.log”).

In most real world cases a LogstreamerInput would include a decoder setting, which would parse the contents of the file to extract data from the text format and map them onto a Heka message schema. In this case, however, we stick with the default behavior, where Heka creates a new message for each line in the log file, storing the text of the log line as the payload of the Heka message.

The next two sections tell Heka what to do with the messages that the LogstreamerInput is generating. The LogOutput simply writes data out to the Heka process’s stdout. We set message_matcher = “TRUE” to specify that this output should capture every single message that flows through the Heka pipeline. The encoder setting tells Heka to use the PayloadEncoder that we’ve configured, which extracts the payload from each captured message and uses that as the raw data that the output will send.

To see whether or not you have a functional Heka system, you can create a file called sanity_check.toml and paste in the above configuration, adjusting the LogstreamerInput’s settings to point to another file if necessary. Then you can run Heka using hekad -config=/path/to/sanity_check.toml, and you should see the contents of the log file printed out to the console. If any new lines are written to the log file that you’re loading, Heka will notice and will write them out to stdout in real time.

Note that the LogstreamerInput keeps track of how far it has gotten in a particular file, so if you stop Heka using ctrl-c and then restart it you will not see the same data. Heka stores the current location in a “seekjournal” file, at /var/cache/hekad/logstreamer/LogstreamerInput by default. If you delete this file and then restart Heka you should see it load the entire file from the beginning again.

Congratulations! You’ve now successfully run Heka with a full, working configuration. But clearly there are much simpler tools to use if all you want to do is write the contents of a log file out to stdout. Now that we’ve got an initial success under our belt, let’s take a deeper dive into a much more complex Heka configuration that actually handles multiple real world use cases.

Global Configuration¶

As mentioned above, Heka is configured using TOML configuration files. Most sections of the TOML configuration contain information relevant to one of Heka’s plugins, but there is one section entitled hekad which allows you to tweak a number of Heka’s global configuration options. In many cases the defaults for most of these options will suffice, and your configuration won’t need a hekad section at all. A few of the options are worth looking at here, however:

  • maxprocs (int, default 1):

    This setting corresponds to Go’s GOMAXPROCS environment variable. It specifies how many CPU cores the hekad process will be allowed to use. The best choice for this setting depends on a number of factors such as the volume of data Heka will be processing, the number of cores on the machine on which Heka is running, and what other tasks the machine will be performing. For dedicated Heka aggregator machines, this should usually be equal to the number of cpu cores available, or perhaps number of cores minus one, while for Heka processes running on otherwise busy boxes one or two is probably a better choice.

  • base_dir (string, default ‘/var/cache/hekad’ or ‘c:\var\cache\hekad’):

    In addition to the location of the configuration files, there are two directories that are important to a running hekad process. The first of these is called the base_dir, which is a working directory where Heka will be storing information crucial to its functioning, such as seekjournal files to track current location in a log stream, or sandbox filter aggregation data that is meant to survive between Heka restarts. It is of course important that the user under which the hekad process is running has write access to the base_dir.

  • share_dir (string, default ‘/usr/share/heka’ or ‘c:\usr\share\heka’):

    The second directory important to Heka’s functioning is called the share_dir. This is a place where Heka expects to find certain static resources that it needs, such as the HTML/javascript source code used by the dashboard output, or the source code to various Lua based plugins. The user owning the hekad process requires read access to this folder, but should not have write access.

It’s worth noting that while Heka defaults to expecting to find certain resources in the base_dir and/or the share_dir folders, it is nearly always possible to override the location of a particular resource on a case by case basis in the plugin configuration. For instance, the filename option in a SandboxFilter specifies the filesystem path to the Lua source code for that filter. If it is specified as a relative path, the path will be computed relative to the share_dir. If it is specified as an absolute path, the absolute path will be honored.

For our example, we’re going to keep the defaults for most global options, but we’ll bump the maxprocs setting from 1 to 2 so we can get at least some parallel behavior:
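
That only takes a short hekad section, along these lines:

    [hekad]
    maxprocs = 2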

Accepting Statsd Data¶

Once we’ve got Heka’s global settings configured, we’re ready to start on the plugins. The first thing we’ll tackle is getting Heka set up to accept data from statsd clients. This involves two different plugins, a Statsd Input that accepts network connections and parses the received stats data, and a Stat Accumulator Input that will accept the data gathered by the StatsdInput, perform the necessary aggregation, and periodically generate ‘statmetric’ messages containing the aggregated data.

The configuration for these plugins is quite simple:
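
Roughly like so:

    [StatsdInput]

    [StatAccumInput]
    ticker_interval = 1
    emit_in_fields = true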

These two TOML sections tell Heka that it should include a StatsdInput and a StatAccumInput. The StatsdInput uses the default value for every configuration setting, while the StatAccumInput overrides the defaults for two of its settings. The ticker_interval = 1 setting means that the statmetric messages will be generated once every second instead of the default of once every five seconds, while the emit_in_fields = true setting means that the aggregated stats data will be embedded in the dynamic fields of the generated statmetric messages, in addition to the default of embedding the graphite text format in the message payload.

This probably seems pretty straightforward, but there are actually some subtleties hidden in there that are important to point out. First, it’s not immediately obvious, but there is an explicit connection between the two plugins. The StatsdInput has a stat_accum_name setting, which we didn’t need to set because it defaults to ‘StatAccumInput’. The following configuration is exactly equivalent:
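
That is, with the default connection spelled out:

    [StatsdInput]
    stat_accum_name = "StatAccumInput"

    [StatAccumInput]
    ticker_interval = 1
    emit_in_fields = true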

The next subtlety to note is that we’ve used a common piece of Heka config shorthand by embedding both the name and the type in the TOML section header. Heka lets you do this as a convenience if you don’t need to use a name that is separate from the type. This doesn’t have to be the case, it’s possible to give a plugin a different name, expressing the type inside the TOML section instead of in its header:
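
For example (the section names here are arbitrary identifiers):

    [statsd_input]
    type = "StatsdInput"
    stat_accum_name = "stats_accumulator"

    [stats_accumulator]
    type = "StatAccumInput"
    ticker_interval = 1
    emit_in_fields = true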

The config above is ever so slightly different from the original two, because our plugins now have different name identifiers, but functionally the behavior is identical to the prior versions. Being able to separate a plugin name from its type is important in cases where you want more than one instance of the same plugin type. For instance, you’d use the following configuration if you wanted to have a second StatsdInput listening on port 8126 in addition to the default on port 8125:
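
A sketch, assuming the StatsdInput’s listen address option is named address (it defaults to port 8125):

    [statsd_input_8125]
    type = "StatsdInput"

    [statsd_input_8126]
    type = "StatsdInput"
    address = ":8126"

    [StatAccumInput]
    ticker_interval = 1
    emit_in_fields = true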

We don’t need two StatsdInputs for our example, however, so for simplicity we’ll go with the most concise configuration.

Forwarding Aggregated Stats Data¶

Collecting stats alone doesn’t provide much value; we want to be able to actually see the data that has been gathered. Statsd servers are typically used to aggregate incoming statistics and then periodically deliver the totals to an upstream time series database, usually Graphite, although InfluxDB is rapidly growing in popularity. For Heka to replace a standalone statsd server it needs to be able to do the same.

To understand how this will work, we need to step back a bit to look at how Heka handles message routing. First, data enters the Heka pipeline through an input plugin. Then it needs to be converted from its original raw format into a message object that Heka knows how to work with. Usually this is done with a decoder plugin, although in the statsd example above the StatAccumInput itself is instead periodically generating statmetric messages.

After the data has been marshaled into one (or more) message(s), the message is handed to Heka’s internal message router. The message router will then iterate through all of the registered filter and output plugins to see which ones would like to process the message. Each filter and output provides a message matcher to specify which messages it would like to receive. The router hands each message to each message matcher, and if there’s a match then the matcher in turn hands the message to the plugin.

To return to our example, we’ll start by setting up a Carbon Output plugin that knows how to deliver messages to an upstream Graphite Carbon server. We’ll configure it to receive the statmetric messages generated by the StatAccumInput:
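
A sketch, with a placeholder Carbon address:

    [CarbonOutput]
    message_matcher = "Type == 'heka.statmetric'"
    address = "mycarbonserver.example.com:2003"
    # assuming the UDP/TCP selector option is named protocol
    protocol = "udp"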

Any messages that pass through the router with a Type field equal to heka.statmetric (which is what the StatAccumInput emits by default) will be handed to this output, which will in turn deliver it over UDP to the specified carbon server address. This is simple, but it’s a fundamental concept. Nearly all communication within Heka happens using Heka message objects being passed through the message router and being matched against the registered matchers.

Okay, so that gets us talking to Graphite. What about InfluxDB? InfluxDB has an extension that allows it to support the graphite format, so we could use that and just set up a second CarbonOutput:
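
For instance, with a placeholder address pointing at the InfluxDB host’s graphite listener:

    [influxdb_graphite]
    type = "CarbonOutput"
    message_matcher = "Type == 'heka.statmetric'"
    address = "myinfluxserver.example.com:2003"
    protocol = "udp"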

A couple of things to note here. First, don’t get confused by the type = “CarbonOutput”, which is specifying the type of the plugin we are configuring, and the “Type” in message_matcher = “Type == ‘heka.statmetric’”, which is referring to the Type field of the messages that are passing through the Heka router. They’re both called “type”, but other than that they are unrelated.

Second, you’ll see that it’s fine to have more than one output (and/or filter, for that matter) plugin with identical message_matcher settings. The router doesn’t care, it will happily give the same message to both of them, and any others that happen to match.

This will work, but it’d be nice to just use the InfluxDB native HTTP API. For this, we can instead use our handy HttpOutput:
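
A sketch; the Lua filename, InfluxDB URL, database name, and credentials are all placeholders to adapt:

    [statmetric-influx-encoder]
    type = "SandboxEncoder"
    filename = "lua_encoders/statmetric_influx.lua"

    [influx]
    type = "HttpOutput"
    message_matcher = "Type == 'heka.statmetric'"
    address = "http://myinfluxserver.example.com:8086/db/stats/series"
    encoder = "statmetric-influx-encoder"
    username = "influx_user"
    password = "influx_pass"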

The HttpOutput configuration above will also capture statmetric messages, and will then deliver the data over HTTP to the specified address where InfluxDB is listening. But wait! What’s all that statmetric-influx-encoder stuff? I’m glad you asked...

Encoder Plugins¶

We’ve already briefly mentioned how, on the way in, raw data needs to be converted into a standard message format that Heka’s router, filters, and outputs are able to process. Similarly, on the way out, data must be extracted from the standard message format and serialized into whatever format is required by the destination. This is typically achieved through the use of encoder plugins, which take Heka messages as input and generate as output raw bytes that an output plugin can send over the wire. The CarbonOutput doesn’t specify an encoder because it assumes that the Graphite data will be in the message payload, where the StatAccumInput puts it, but most outputs need an encoder to be specified so they know how to generate their data stream from the messages that are received.

In the InfluxDB example above, you can see that we’ve defined a statmetric-influx-encoder, of type SandboxEncoder. A “Sandbox” plugin is one where the core logic of the plugin is implemented in Lua and is run in a protected sandbox. Heka has support for Sandbox Decoder, Sandbox Filter, and Sandbox Encoder plugins. In this instance, we’re using a SandboxEncoder implementation provided by Heka that knows how to extract data from the fields of a heka.statmetric message and use that data to generate JSON in a format that will be understood by InfluxDB (see StatMetric InfluxDB Encoder).

This separation of concerns between encoder and output plugins allows for a great deal of flexibility. It’s easy to write your own SandboxEncoder plugins to generate any format needed, allowing the same HttpOutput implementation to be used for multiple HTTP-based back ends, rather than needing a separate output plugin for each service. Also, the same encoder can be used with different outputs. If, for instance, we wanted to write the InfluxDB formatted data to a file system file for later processing, we could use the statmetric-influx-encoder with a FileOutput to do so.

Real Time Stats Graph¶

While both Graphite and InfluxDB provide mechanisms for displaying graphs of the stats data they receive, Heka is also able to provide graphs of this data directly. These graphs will be updated in real time, as the data is flowing through Heka, without the latency of the data store driven graphs. The following config snippet shows how this is done:
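
A sketch; the stats and stat_labels values are placeholders for whichever statsd counters you want plotted:

    [stat_graph]
    type = "SandboxFilter"
    filename = "lua_filters/stat_graph.lua"
    ticker_interval = 1
    preserve_data = true

    [stat_graph.config]
    num_rows = 300
    secs_per_row = 1
    stats = "stats.counters.000000.count stats.counters.000001.count"
    stat_labels = "counter_0 counter_1"
    preservation_version = 0

    [DashboardOutput]
    ticker_interval = 1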

There’s a lot going on in just a short bit of configuration here, so let’s consider it one piece at a time to understand what’s happening. First, we’ve got a stat_graph config section, which is telling Heka to start up a SandboxFilter plugin, a filter plugin with the processing code implemented in Lua. The filename option points to a filter implementation that ships with Heka. This filter implementation knows how to extract data from statmetric messages and store that data in a circular buffer data structure. The preserve_data option tells Heka that all the global data in this filter (the circular buffer data, in this case) should be flushed out to disk if Heka is shut down, so it can be reloaded again when Heka is restarted. And the ticker_interval option is specifying that our filter will be emitting an output message containing the cbuf data back into the router once every second. This message can then be consumed by other filters and/or outputs, such as our DashboardOutput which will use it to generate graphs (see next section).

After that we have a stat_graph.config section. This isn’t specifying a new plugin, this is nested configuration, a subsection of the outer stat_graph section. (Note that the section nesting is specified by the use of the stat_graph. prefix in the section name; the indentation helps readability, but has no impact on the semantics of the configuration.) The stat_graph section configures the SandboxFilter and tells it what Lua source code to use, while the stat_graph.config section is passed in to the Lua source code for further customization of the filter’s behavior.

So what is contained in this nested configuration? The first two options, num_rows and secs_per_row, are configuring the circular buffer data structure that the filter will use to store the stats data. It can be helpful to think of circular buffer data structures as a spreadsheet. Our spreadsheet will have 300 rows, and each row will represent one second of accumulated data, so at any given time we will be holding five minutes worth of stats data in our filter. The next two options, stats and stat_labels, tell Heka which statistics we want to graph and provide shorter labels for use in the graph legend. Finally the preservation_version setting allows us to version our data structures. This is needed because our data structures might change. If you let this filter run for a while, gathering data, and then shut down Heka, the 300 rows of circular buffer data will be written to disk. If you then change the num_rows setting and try to restart Heka the filter will fail to start, because the 300 row size of the preserved data won’t match the new size that you’ve specified. In this case you would increment the preservation_version value from 0 to 1, which will tell Heka that the preserved data is no longer valid and the data structures should be created anew.

Heka Dashboard¶

At this point it’s useful to notice that, while the SandboxFilter gathers the data that we’re interested in and packages it up in a format that’s useful for graphing, it doesn’t actually do any graphing. Instead, it periodically creates a message of type heka.sandbox-output, containing the current circular buffer data, and injects that message back into Heka’s message router. This is where the Dashboard Output that we’ve configured comes in.

Heka’s DashboardOutput is configured by default to listen for heka.sandbox-output messages (along with a few other message types, which we’ll ignore for now). When it receives a sandbox output message, it will examine the contents of the message, and if the message contains circular buffer data it will automatically generate a real time graph of that data.

By default, the dashboard UI is available by pointing a web browser at port 4352 of the machine where Heka is running. The first page you’ll see is the Health report, which provides an overview of the plugins that are configured, along with some information about how messages are flowing through the Heka pipeline:

.. and scrolling further down the page ..

In the page header is a Sandboxes link, which will take you to a listing of all of the running SandboxFilter plugins, along with a list of the outputs they emit. Clicking on this we can see our stat_graph filter and the Stats circular buffer (“CBUF”) output:

If you click on the filter name stat_graph, you’ll see a page showing detailed information about the performance of that plugin, including how many messages have been processed, the average amount of time a message matcher takes to match a message, the average amount of time spent processing a message, and more:

Finally, clicking on the Stats link will take us to the actual rendered output, a line graph that updates in real time, showing the values of the specific counter stats that we have specified in our stat_graph SandboxFilter configuration:

Other stats can be added to this graph by adjusting the stats and stat_labels values for our existing stat_graph filter config, although if we do so we’ll have to bump the preservation_version to tell Heka that the previous data structures are no longer valid. You can create multiple graphs by including additional SandboxFilter sections using the same stat_graph.lua source code.

It also should be mentioned that, while the stat_graph.lua filter we’ve been using only emits a single output graph, it is certainly possible for a single filter to generate multiple graphs. It’s also possible for SandboxFilters to emit other types of output, such as raw JSON data, which the DashboardOutput will happily serve as raw text. This can be very useful for generating ad-hoc API endpoints based on the data that Heka is processing. Dig in to our Sandbox documentation to learn more about writing your own Lua filters using our Sandbox API.

Loading and Parsing Nginx Log Files¶

For our next trick, we’ll be loading an Nginx HTTP server’s access log files and extracting information about each HTTP request logged therein, storing it in a more structured manner in the fields of a Heka message. The first step is telling Heka where it can find the Nginx access log file. Except that the Nginx log typically isn’t just a single file, it’s a series of files subject to site specific rotation schemes. On the author’s Ubuntu-ish system, for instance, the /var/log/nginx directory looks like this, at the time of writing:
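
Typically a listing along these lines (illustrative):

    access.log
    access.log.1
    access.log.2.gz
    access.log.3.gz
    ...
    access.log.14.gz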

This is a common rotation scheme, but there are many others out there. And in cases where many domains are being hosted, there might be several sets of log files, one for each domain, each distinguished from the others by file and/or folder name. Luckily Heka’s Logstreamer Input provides a mechanism for handling all of these cases and more. The LogstreamerInput already has extensive documentation, so we won’t go into exhaustive detail here; instead we’ll show an example config that correctly handles the above case:
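
A sketch matching the rotation scheme above:

    [nginx_access_logs]
    type = "LogstreamerInput"
    splitter = "TokenSplitter"
    decoder = "nginx_access_decoder"
    log_directory = "/var/log/nginx"
    file_match = 'access\.log\.?(?P<Index>\d{1,2})?(\.gz)?'
    priority = ["^Index"]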

The splitter option above tells Heka that each record will be delimited by a one character token, in this case the default token ‘\n’. If our records were delimited by a different character we could add a Token Splitter section specifying an alternate. If a single character isn’t sufficient for finding our record boundaries, such as in cases where a record spans multiple lines, we can use a Regex Splitter to provide a regular expression that describes the record boundary. The log_directory option tells where the files we’re interested in live. The file_match is a regular expression that matches all of the files comprising the log stream. In this case, they all must start with access.log, after which they can (optionally) be followed by a dot (.), then (optionally, again) one or two digits, then (optionally, one more time) a gzip extension (.gz). Any digits that are found are captured as the Index match group, and the priority option specifies that we use this Index value to determine the order of the files. The leading caret character (^) reverses the order of the priority, since in our case lower digits mean newer files.

The LogstreamerInput will use this configuration data to find all of the relevant files, then it will start working its way through the entire stream of files from oldest to newest, tracking its progress along the way. If Heka is stopped and restarted, it will pick up where it left off, even if that file was rotated during the time that Heka was down. When it gets to the end of the newest file, it will follow along, loading new lines as they’re added, and noticing when the file is rotated so it can hop forward to start loading the newer one.

Which then brings us to the decoder option. This tells Heka which decoder plugin the LogstreamerInput will be using to parse the loaded log files. The nginx_access_decoder configuration is as follows:
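
Roughly as follows; the log_format shown is Nginx’s stock “combined” format, so substitute whatever your own nginx.conf defines:

    [nginx_access_decoder]
    type = "SandboxDecoder"
    filename = "lua_decoders/nginx_access.lua"

    [nginx_access_decoder.config]
    type = "nginx.access"
    log_format = '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"'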

Some of this should be looking familiar by now. This is a SandboxDecoder, which means that it is a decoder plugin with the actual parsing logic implemented in Lua. The outer config section configures the SandboxDecoder itself, while the nested section provides additional config information that is passed in to the Lua code.

While it’s certainly possible to write your own custom Lua parsing code, in this case we are again using a plugin provided by Heka, specifically designed for parsing Nginx access logs. But Nginx doesn’t have a single access log format; the exact output is dynamically specified by a log_format directive in the Nginx configuration. Luckily Heka’s decoder is quite sophisticated; all you have to do to parse your access log output is copy the appropriate log_format directive out of the Nginx configuration file and paste it into the log_format option in your Heka decoder config, as above, and Heka will use the magic of LPEG to dynamically create a grammar that will extract the data from the log lines and store them in Heka message fields. Finally the type option above lets you specify what the Type field should be set to on the messages generated by this decoder.

Sending Nginx Data to ElasticSearch¶

One common use case people are interested in is taking the data extracted from their HTTP server logs and sending it on to ElasticSearch, often so they can peruse that data using dashboards generated by the excellent dashboard creation tool Kibana. We’ve handled loading and parsing the information with our input and decoder configuration above, now let’s look at the other side with the following output and encoder settings:
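
A sketch; the server URL is a placeholder, and the record type option is assumed to be named type_name:

    [ESJsonEncoder]
    es_index_from_timestamp = true
    type_name = "%{Type}"

    [ElasticSearchOutput]
    message_matcher = "Type == 'nginx.access'"
    server = "http://elasticsearch.example.com:9200"
    flush_interval = 50
    encoder = "ESJsonEncoder"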

Working backwards, we’ll first look at the ElasticSearch Output configuration. The server setting indicates where ElasticSearch is listening. The message_matcher tells us we’ll be catching messages with a Type value of nginx.access, which you’ll recall was set in the decoder configuration we discussed above. The flush_interval setting specifies that we’ll be batching our records in the output and flushing them out to ElasticSearch every 50 milliseconds.

Which leaves us with the encoder setting, and the corresponding ElasticSearch JSON Encoder section. The ElasticSearchOutput uses ElasticSearch’s Bulk API to tell ElasticSearch how the documents should be indexed, which means that each document insert consists of a small JSON object satisfying the Bulk API followed by another JSON object containing the document itself. At the time of writing, Heka provides three encoders that will extract data from a Heka message and generate an appropriate Bulk API header: the ElasticSearch JSON Encoder we use above, which generates a clean document schema based on the schema of the message that is being encoded; the ElasticSearch Logstash V0 Encoder, which uses the “v0” schema format defined by Logstash (specifically intended for HTTP request data, natively supported by Kibana); and the ElasticSearch Payload Encoder, which assumes that the message payload will already contain a fully formed JSON document ready for sending to ElasticSearch, and just prepends the necessary Bulk API segment.

In our ESJsonEncoder section, we’re mostly adhering to the default settings. By default, this encoder inserts documents into an ElasticSearch index based on the current date: heka-YYYY.MM.DD (spelled as heka-%{2006.01.02} in the config). The es_index_from_timestamp = true option tells Heka to use the timestamp from the message when determining the date to use for the index name, as opposed to the default behavior which uses the system clock’s current time as the basis. The type_name option tells Heka what ElasticSearch record type should be used for each record. This option supports interpolation of various values from the message object; in the example above the message’s Type field will be used as the ElasticSearch record type name.

Generating HTTP Status Code Graphs¶

ElasticSearch and Kibana provide a number of nice tools for graphing and querying the HTTP request data that is being parsed from our Nginx logs but, as with the stats data above, it would be nice to get real time graphs of some of this data directly from Heka. As you might guess, Heka already provides plugins specifically for this purpose:
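
A sketch using the http_status filter that ships with Heka (the nested rows and sec_per_row option names are assumptions):

    [http_status]
    type = "SandboxFilter"
    filename = "lua_filters/http_status.lua"
    ticker_interval = 1
    preserve_data = true
    message_matcher = "Type == 'nginx.access'"

    [http_status.config]
    sec_per_row = 1
    rows = 1800
    preservation_version = 0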

As mentioned earlier, graphing in Heka is accomplished through the cooperation of a filter which emits messages containing circular buffer data, and the DashboardOutput which consumes those messages and displays the data on a graph. We already configured a DashboardOutput earlier, so now we just need to add a filter that catches the nginx.access messages and aggregates the data into a circular buffer.

Heka has a standard message format that it uses for data that represents a single HTTP request, used by the Nginx access log decoder that is parsing our log files. In this format, the status code of the HTTP response is stored in a dynamic message field called, simply, status. The above filter will create a circular buffer data structure to store these response status codes in 6 columns: 100s, 200s, 300s, 400s, 500s, and unknown. Similar to before, the nested configuration tells the filter how many rows of data to keep in the circular buffer and how many seconds of data each row should represent. It also gives us a preservation_version so we can flag when the data structures have changed.

Once we add this section to our configuration and restart hekad, we should be able to browse to the dashboard UI and find a graph of the various response status categories that are extracted from our HTTP server logs.

Anomaly Detection¶

We’re getting close to the end of our journey. All of the data that we want to gather is now flowing through Heka, being delivered to external data stores for offline processing and analytics, and being displayed in real time graphs by Heka’s dashboard. The only remaining behavior we’re going to activate is anomaly detection, and the generation of notifiers based on anomalous events being detected. We’ll start by looking at the anomaly detection piece.

We’ve already discussed how Heka uses a circular buffer library to track time series data and generate graphs in the dashboard. Well it turns out that the anomaly detection features that Heka provides make use of the same circular buffer library.

Under the hood, how it works is that you provide an “anomaly config”, which is a string that looks something like a programming function call. The anomaly config specifies which anomaly detection algorithm should be used. Algorithms currently supported by Heka are a standard deviation rate of change test, and both parametric (i.e. Gaussian) and non-parametric Mann-Whitney-Wilcoxon tests. Included in the anomaly config is information about which column in a circular buffer data structure we want to monitor for anomalous behavior. Later, the parsed anomaly config is passed in to the detection module’s detect function, along with a populated circular buffer data structure, and the circular buffer data will be analyzed using the specified algorithm.

Luckily, for our use cases, you don’t have to worry too much about all of the details of using the anomaly detection library, because the SandboxFilters we’ve been using have already taken care of the hard parts. All we need to do is create an anomaly config string and add that to our config sections. For instance, here’s an example of how we might monitor our HTTP response status codes:
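
The same http_status sections as before, with an anomaly_config string added (its two pieces are unpacked below):

    [http_status]
    type = "SandboxFilter"
    filename = "lua_filters/http_status.lua"
    ticker_interval = 1
    preserve_data = true
    message_matcher = "Type == 'nginx.access'"

    [http_status.config]
    sec_per_row = 1
    rows = 1800
    anomaly_config = 'roc("HTTP Status", 2, 15, 0, 1.5, true, false) mww_nonparametric("HTTP Status", 5, 15, 10, 0.8)'
    preservation_version = 0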

Everything is the same as our earlier configuration, except we’ve added an anomaly_config setting. There’s a lot in there, so we’ll examine it a piece at a time. The first thing to notice is that there are actually two anomaly configs specified. You can add as many as you’d like. They’re space delimited here for readability, but that’s not strictly necessary; the parentheses surrounding the config parameters are enough for Heka to identify them. Next we’ll dive into the configurations, each in turn.

The first anomaly configuration by itself looks like this:
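
Spelled out, with the arguments in the order described below:

    roc("HTTP Status", 2, 15, 0, 1.5, true, false)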

The roc portion tells us that this config is using the rate of change algorithm. Each algorithm has its own set of parameters, so the values inside the parentheses are those that are required for a rate of change calculation. The first argument is payload_name, which needs to correspond to the payload_name value used when the message is injected back into Heka’s message router, which is “HTTP Status” in the case of this filter.

The next argument is the circular buffer column that we should be watching. We’re specifying column 2 here, which a quick peek at the http_status.lua source code will show you is the column where we’re tracking 200 status codes. The next value specifies how many intervals (i.e. circular buffer rows) we should use in our analysis window. We’ve said 15, which means that we’ll be examining the rate of change between the values in two 15 second intervals. Specifically, we’ll be comparing the data in rows 2 through 16 to the data in rows 17 through 31 (we always throw out the current row because it might not yet be complete).

After that we specify the number of intervals to use in our historical analysis window. Our setting of 0 means we’re using the entire history, rows 32 through 1800. This is followed by the standard deviation threshold parameter, which we’ve set to 1.5. So, put together, we’re saying if the rate of change of the number of 200 status responses over the last two 15 second intervals is more than 1.5 standard deviations off from the rate of change over the 29 minutes before that, then an anomaly alert should be triggered.

The last two parameters here are boolean values. The first of these is whether or not an alert should be fired in the event that we stop receiving input data (we’re saying yes), the second whether or not an alert should be fired if we start receiving data again after a gap (we’re saying no).

That’s the first one, now let’s look at the second:
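
Again spelled out:

    mww_nonparametric("HTTP Status", 5, 15, 10, 0.8)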

The mww_nonparametric tells us, as you might guess, that this config will be using the Mann-Whitney-Wilcoxon non-parametric algorithm for these computations. This algorithm can be used to identify similarities (or differences) between multiple data sets, even when those data sets have a non-Gaussian distribution, such as cases where the set of data points is sparse.

The next argument tells us what column we’ll be looking at. In this case we’re using column 5, which is where we store the 500 range status responses, or server errors. After that is the number of intervals to use in an analysis window (15), followed by the number of analysis windows to compare (10). In this case, that means we’ll be examining the last 15 seconds, and comparing what we find there with the 10 prior 15 second windows, or the 150 previous seconds.

The final argument is called pstat, which is a floating point value between 0 and 1. This tells us what type of data changes we’re going to be looking for. Anything over 0.5 means we’re looking for an increasing trend, anything below 0.5 means we’re looking for a decreasing trend. We’ve set this to 0.8, which is clearly in the increasing trend range.

So, taken together, this anomaly config means that we’re going to be watching the last 15 seconds to see whether there is an anomalous spike in server errors, compared to the 10 intervals immediately prior. If we do detect a sizable spike in server errors, we consider it an anomaly and an alert will be generated.

In this example, we’ve only specified anomaly detection on our HTTP response status monitoring, but the anomaly_config option is also available to the stat graph filter, so we could apply similar monitoring to any of the statsd data that is contained in our statmetric messages.

Notifications¶

But what do we mean, exactly, when we say that detecting an anomaly will generate an alert? As with nearly everything else in Heka, what we’re really saying is that a message will be injected into the message router, which other filter and output plugins are then able to listen for and use as a trigger for action.

We won’t go into detail here, but along with the anomaly detection module Heka’s Lua environment provides an alert module that generates alert messages (with throttling, to make sure hundreds of alerts in rapid succession don’t actually generate hundreds of separate notifications) and an annotation module that causes the dashboard to apply annotations to the graphs based on our circular buffer data. Both the http status and stat graph filters make use of both of these, so if you specify anomaly configs for either of those filters, output graphs will be annotated and alert messages will be generated when anomalies are detected.

Alert messages aren’t of much use if they’re just flowing through Heka’s message router and nothing is listening for them, however. So let’s set up an SmtpOutput that will listen for the alert messages, sending emails when they come through:
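
A sketch; the message matcher expression, the alert encoder filename, and all of the addresses below are assumptions to adapt to your own setup:

    [alert_encoder]
    type = "SandboxEncoder"
    filename = "lua_encoders/alert.lua"

    [SmtpOutput]
    message_matcher = "Type == 'heka.sandbox-output' && Fields[payload_type] == 'alert'"
    encoder = "alert_encoder"
    send_from = "heka@example.com"
    send_to = ["alerts@example.com"]
    host = "smtp.example.com:25"
    auth = "none"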

First we specify an encoder, using a very simple encoder implementation provided by Heka which extracts the timestamp, hostname, logger, and payload from the message and emits those values in a text format. Then we add the output itself, listening for any alert messages that are emitted by any of our SandboxFilter plugins, using the encoder to format the message body, and sending an outgoing mail message through the SMTP server as specified by the other configuration options.

And that’s it! We’re now generating email notifiers from our anomaly detection alerts.

Tying It All Together¶

Here’s what our full config looks like if we put it all together into a single file:
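
Assembled from the sketches above (placeholders and assumed option names included):

    [hekad]
    maxprocs = 2

    [StatsdInput]

    [StatAccumInput]
    ticker_interval = 1
    emit_in_fields = true

    [CarbonOutput]
    message_matcher = "Type == 'heka.statmetric'"
    address = "mycarbonserver.example.com:2003"
    protocol = "udp"

    [statmetric-influx-encoder]
    type = "SandboxEncoder"
    filename = "lua_encoders/statmetric_influx.lua"

    [influx]
    type = "HttpOutput"
    message_matcher = "Type == 'heka.statmetric'"
    address = "http://myinfluxserver.example.com:8086/db/stats/series"
    encoder = "statmetric-influx-encoder"

    [stat_graph]
    type = "SandboxFilter"
    filename = "lua_filters/stat_graph.lua"
    ticker_interval = 1
    preserve_data = true

    [stat_graph.config]
    num_rows = 300
    secs_per_row = 1
    stats = "stats.counters.000000.count stats.counters.000001.count"
    stat_labels = "counter_0 counter_1"
    preservation_version = 0

    [DashboardOutput]
    ticker_interval = 1

    [nginx_access_logs]
    type = "LogstreamerInput"
    splitter = "TokenSplitter"
    decoder = "nginx_access_decoder"
    log_directory = "/var/log/nginx"
    file_match = 'access\.log\.?(?P<Index>\d{1,2})?(\.gz)?'
    priority = ["^Index"]

    [nginx_access_decoder]
    type = "SandboxDecoder"
    filename = "lua_decoders/nginx_access.lua"

    [nginx_access_decoder.config]
    type = "nginx.access"
    log_format = '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"'

    [ESJsonEncoder]
    es_index_from_timestamp = true
    type_name = "%{Type}"

    [ElasticSearchOutput]
    message_matcher = "Type == 'nginx.access'"
    server = "http://elasticsearch.example.com:9200"
    flush_interval = 50
    encoder = "ESJsonEncoder"

    [http_status]
    type = "SandboxFilter"
    filename = "lua_filters/http_status.lua"
    ticker_interval = 1
    preserve_data = true
    message_matcher = "Type == 'nginx.access'"

    [http_status.config]
    sec_per_row = 1
    rows = 1800
    anomaly_config = 'roc("HTTP Status", 2, 15, 0, 1.5, true, false) mww_nonparametric("HTTP Status", 5, 15, 10, 0.8)'
    preservation_version = 0

    [alert_encoder]
    type = "SandboxEncoder"
    filename = "lua_encoders/alert.lua"

    [SmtpOutput]
    message_matcher = "Type == 'heka.sandbox-output' && Fields[payload_type] == 'alert'"
    encoder = "alert_encoder"
    send_from = "heka@example.com"
    send_to = ["alerts@example.com"]
    host = "smtp.example.com:25"
    auth = "none"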

This isn’t too terribly long, but even so it might be nice to break it up into smaller pieces. Heka supports the use of a directory instead of a single file for configuration; if you specify a directory all files ending with .toml will be merged together and loaded as a single configuration, which is preferable for more complex deployments.

This example is not in any way meant to be an exhaustive list of Heka’s features. Indeed, we’ve only just barely scratched the surface. Hopefully, though, it gives those of you who are new to Heka enough context to understand how the pieces fit together, and it can be used as a starting point for developing configurations that will meet your own needs. If you have questions or need assistance getting things going, please make use of the mailing list, or use an IRC client to come visit in the #heka channel on irc.mozilla.org.




