InfluxData

Press Release: InfluxData and Google Cloud Partner to Offer Leading Time Series Database on Google Cloud Platform

Permalink - Posted on 2019-04-09 16:00, modified at 16:36

New integrated solution emphasizes benefits of open source technologies and expands availability and distribution of InfluxDB to Google Cloud Platform

SAN FRANCISCO – April 9, 2019 – InfluxData, creator of the open source time series database InfluxDB, and Google Cloud today announced a partnership to integrate InfluxDB Cloud 2.0 with the Google Cloud Platform to offer a streamlined user experience. Google Cloud customers will have access to the new solution, InfluxDB Cloud on Google Cloud, directly on the Google Cloud Marketplace or through the Google enterprise sales team.

The explosion of DevOps (e.g., Kubernetes, microservices) and the Internet of Things (e.g., sensors) is motivating enterprises to look for ways to achieve better results, gain new insights and grow their customer bases through effective processing and analysis of time series data. IDC predicts that the global sum of data will grow to 175 zettabytes by 2025 and of that, 30 percent will be consumed in real time. As a result, time series databases are in high demand and this need will continue to grow. Through this partnership, Google Cloud customers will have easy access to the leading time series database to fit their organizations’ ever-growing needs.

“Bringing InfluxDB to Google Cloud was a natural choice for us given InfluxData’s proven history of customer-centric, open source innovation,” said Kevin Ichhpurani, Corporate Vice President, Global Partner Ecosystem at Google Cloud. “We’re committed to delivering best-in-class technology and services to our customers. Partnering with InfluxData and offering InfluxDB as a managed service on GCP will help us do this while continuing to foster a robust and customer-focused open source ecosystem.”

Google Cloud is partnering with top open source providers in several categories to bring the best of open source to its customer base. Customers will have the flexibility to develop with open source technology using the services from top partners integrated with GCP. This will include the ability to manage InfluxDB Cloud 2.0 from the Google Cloud console and have the same integrated billing, support and other deep product integrations with GCP.

“Google and InfluxData share a belief that the open source community fosters innovation and transparency that is unmatched by any proprietary product offering,” said Evan Kaplan, CEO of InfluxData. “Driven by increased instrumentation of the physical and software worlds, demand for time series is growing faster than any other database category. This close partnership amplifies our strategy to increase accessibility to InfluxDB and ultimately empower more developers with our time series solutions.”

Product Availability
InfluxDB Cloud on Google Cloud will be available in late 2019 to companies using Google Cloud Platform. Get updates and learn more about InfluxData’s partnerships with Google.

About InfluxData
InfluxData, creator of InfluxDB, delivers a modern open source platform, built from the ground up, for analyzing metrics and events (time series data) for DevOps and IoT applications. Whether the data comes from humans, sensors, or machines, InfluxData empowers developers to build next-generation monitoring, analytics, and IoT applications faster and easier, and to scale, delivering real business value quickly. Based in San Francisco, InfluxData has more than 500 customers including Cisco, eBay, IBM and Siemens. For more information, visit www.influxdata.com. Twitter: @influxdb.



InfluxDays 2019 NYC, Another Hit!

Permalink - Posted on 2019-04-08 15:00, modified at 05:28

[Image: InfluxDays NYC 2019 banner]

InfluxDays NYC was my first InfluxDays, and to say I was impressed would be an understatement. After spending two days with community members and hearing about how they’re using our products coupled with all the talks, I was even more energized and excited about what we’re building.

We had a great turnout for the event — plus, it was livestreamed, so people from around the world who weren’t in New York City were able to tune in as well.

Day 1 Recap

Paul Dix started the first day off on a high note, talking about the future and vision of the InfluxData platform and giving a roadmap for InfluxDB 2.0. Next up was community member and InfluxAce Matt Iverson from Optum, a division of UnitedHealth Group, discussing their monitoring journey and how Optum is reducing snowflake servers with automatic deployment using an open source tool they call Lighthouse. Tim Hall, our VP of Products at InfluxData, wrapped up the morning by showcasing how to set up and use InfluxCloud 2.0.

We came back after lunch and heard from Jacob Lisi, a Software Engineer at Grafana Labs, who demonstrated how Flux supercharges queries, discussed its features and the moving parts for deployment, and showed how to use it with Grafana. Richard Laskey, Senior Software Engineer at Wayfair, took the stage next to share the monitoring best practices for InfluxEnterprise that they use to monitor their storefront. Next up, our Director of Engineering, Ryan Betts, talked about InfluxData internals, sharing lessons learned about scaling the platform across a large number of deployments.

Rounding out the day, we heard from Krisha Krishnaraju, Rajeev Tomer, and Karl Daman from Capital One about the need to architect for disaster recovery of time series data, sharing their journey to plan and execute a disaster recovery plan. After wrapping up the talks, we had a cocktail happy hour, giving us time to network with other attendees.

Day 2 Recap

On Day 2, we had workshops focusing on two tracks — v1 and v2, with talks covering topics like an in-depth intro to InfluxDB 2.0 and Flux queries, optimizing InfluxDB 1.0, building Telegraf plugins, architecting InfluxEnterprise, dashboarding, container monitoring, client libraries, and InfluxDB for IoT.

Thank You

Seriously, thank you to all of our users who came out from all over the continent to join us, and to the great speakers who shared their experiences and lessons learned. If you weren’t able to catch the event in NYC or on the livestream, or if you just want to see the talks again, we’ve got the videos, all of the slide decks from the speakers, and great photos. We also want to give a huge shoutout to Grafana Labs, AWS, and Google Cloud, who sponsored this event!

InfluxDays 2019 London

If you missed InfluxDays NYC, no worries — InfluxDays 2019 London is only a few weeks away. We’ll be gathering on June 13-14, 2019 at The Brewery, 52 Chiswell Street, London, EC1Y 4SD. Like our NYC event, we’ll have talks on the first day with hands-on workshops on Day 2. The workshops will be on two tracks, a Getting Started Track and a More Advanced Track, guaranteeing that there really is something for everyone.

Registration is open and tickets are available from only £199. Use promo code BLOG to save 20%. See you in London!



InfluxData Jumpstarts 2019 with New Funding, Customers, Community Growth and More

Permalink - Posted on 2019-04-04 16:17, modified on 2019-04-05 18:34

On the heels of a record year in 2018, InfluxData experienced continued momentum in the first quarter of 2019. We have achieved amazing growth across the organization — from new business wins to product enhancements — and we are well on the way to meeting (and even beating) our goals for this year.

Growing community

Active open source instances surpassed the 200K mark, reaching 213K daily instances. And we hosted nine meetups across the globe, engaging nearly 4,000 community members.

#1 time series database

InfluxDB continues to lead the time series database category, according to DB-Engines, and also moved up to #34 in the overall database ranking.

New customers

We surpassed 500 customers in Q1, including new business with Renault, the National Center for Biotechnology Information, Puppet Labs, McLaren Automotive and Gigaclear.

New products

We made continued enhancements to our products, including many maintenance and feature-bearing releases in Q1. The most significant news, though, was the launch of InfluxDB 2.0 Alpha. Various editions of InfluxDB 2.0 will roll out during 2019, and they include new features designed to make working with time series data simpler and more powerful for developers and companies. More updates coming soon.

Company news

In February, we announced $60 million in Series D funding. The round was led by Norwest Venture Partners and joined by Sorenson Capital and existing investors Sapphire Ventures, Battery Ventures, Mayfield Fund, Trinity Ventures and Harmony Partners.

We hosted a successful InfluxDays user conference in New York in March. It featured talks from customers, including Wayfair, Capital One, Oracle and Optum, and technical workshops. Videos of the presentations are available online!

Leadership team

We kicked off the year by welcoming two new members to the executive team — Jim Walsh as SVP of engineering and Will Paulus as VP of sales. We also welcomed two new esteemed members to the board of directors — Max Schireson, EIR at Battery Ventures and former CEO of MongoDB, and Rama Sekhar, managing director at Norwest Venture Partners.

New partnerships

InfluxDB is now integrated with VMware’s Pulse IoT Center, an IoT infrastructure management solution. The new offering embeds InfluxDB Enterprise and Kapacitor to optimize time series data handling from connected IoT devices, sensors and infrastructure running at the edge. InfluxData was also an early strategic partner of DigitalOcean’s new Marketplace. InfluxDB is now available on DigitalOcean Marketplace where developers can connect with easy-to-use partner-built solutions, enabling simple app development, deployment and scaling.

Recognition

InfluxData was identified as one of the “Top 25 IoT Startups to Watch In 2019” by Forbes.

Speaking engagements

The Influx team traveled around the world in Q1, delivering talks at conferences from Los Angeles to Cape Town, South Africa.



Introducing Our New InfluxData Community Slack Workspace

Permalink - Posted on 2019-04-03 15:00, modified on 2019-04-05 20:13

We’re thrilled to announce that, starting today, we have a public Influx Community Slack workspace to continue our commitment to building both great open source projects and a strong community. With the new Slack workspace comes yet another opportunity for our community members to become more engaged in what’s happening at InfluxData and to stay informed of ways that they can participate in projects and initiatives.

Join Influx Community on Slack

Getting started

We will continue to host our Community Forums, our Subreddit, and social media profiles; our Slack is not intended to do away with any of our community initiatives but to enhance what’s already happening. We’ve been working for a few weeks on the logistics of opening up the Slack workspace to the community to ensure that we’re creating a productive space. We’ve got channels set up for all of our open source projects, like #influxdb, #telegraf, and #flux, plus #v2, #meetups, and #community-support.

Please take a moment to check out the #code-of-conduct channel; we have an amazing community that is open and welcoming to everyone, and it’s a good refresher on what we’re building here.

[Screenshot: the InfluxData Community Slack workspace]

How do I join?

We’ve got a public invite that you can fill out to join here, or you can email us at community@influxdata.com for an invite, to ask questions, or to express any concerns.

How do I access the new Slack workspace?

After you’ve requested your invite, you’ll be able to sign in at influxcommunity.slack.com.

What if I need a new channel?

Don’t see a channel for your project or location-specific initiative? Just post a request in our #ask-an-admin channel, or DM someone with the admin star next to their name.

We’re excited about this next chapter in our InfluxData community and look forward to seeing you in our new workspace.



Release Announcement: Telegraf 1.10.2

Permalink - Posted on 2019-04-03 05:42

A new maintenance release for Telegraf is available now.

IMPORTANT NOTE: For anyone using the Grok Parser: String fields no longer have leading and trailing quotation marks removed in the Grok parser. If you are capturing quoted strings you may need to update the patterns.
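As a hypothetical illustration (not taken from the release notes): if a log line looks like "hello" 200 and you previously captured the message with %{QUOTEDSTRING:msg}, counting on the parser to strip the quotes, you can instead keep the quotes out of the capture:

[[inputs.file]]
  ## Hypothetical log file path, for illustration only
  files = ["/var/log/example.log"]
  data_format = "grok"
  ## Match the quotes explicitly so only the inner text lands in the msg field
  grok_patterns = ["\"%{DATA:msg}\" %{NUMBER:code:int}"]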

This maintenance release of Telegraf 1.10.2 includes improvements to the following plugins:

  1. Agent
    • Resolved a potential deadlock when the agent attempts to align aggregators.
    • Fixed aggregator window alignment.
    • Added owned directories to the rpm package spec.
    • Fixed drop tracking for metrics removed with the aggregator drop_original option.
    • Resolved a panic during the shutdown of multiple aggregators.
  2. Ceph Input (ceph)
    • Added the cluster stats that were missing from this input.
  3. DiskIO Input (diskio)
    • Fixed an issue reading major and minor block device identifiers.
  4. File Output (file)
    • Fixed open file error handling.
  5. Filecount Input (filecount)
    • Fixed the basedir check and parent dir extraction.
  6. Grok Parser (grok)
    • The last character is no longer removed from String fields.
  7. Influx Parser (influx)
    • Fixed tags being applied to the wrong metric on parse error.
  8. InfluxDB v2 Output (influxdb_v2)
    • Fixed the plugin name in logging.
  9. Prometheus Input (prometheus)
    • Fixed parsing of kube config certificate-authority-data.
  10. Prometheus Client Output (prometheus_client)
    • Removed tags that would create invalid label names.
  11. StatsD Input (statsd)
    • The Start method no longer returns until the plugin is listening.

The binaries for the latest open source release can be found on our downloads page.



How to Write Points from CSV to InfluxDB

Permalink - Posted on 2019-04-01 15:00, modified on 2019-03-29 19:38

Telegraf is InfluxData’s plugin-driven server agent for collecting and reporting metrics. There are over 200 input plugins, which means there are a lot of ways to get data into InfluxDB. However, I frequently see new Influx users inquiring about how to write points from CSV to InfluxDB on the InfluxData Community Site. Writing points from a CSV file is an easy way to insert familiar data into InfluxDB, which can make it easier to get acquainted with the platform.

Requirements and setup for importing data from CSV to InfluxDB

In a previous blog, I shared three easy methods to insert data to the database (including a Python script to convert CSV data to line protocol, InfluxDB’s data ingest format). This blog is a guide for how to write points from a CSV using the Telegraf File Input Plugin. The Telegraf File Input Plugin writes points faster than a Python script would, which can be extremely useful when you’re performing a bulk import. With it, I hope to dispel any confusion that new users might have. I will assume that you are a macOS user and have installed InfluxDB and Telegraf with Homebrew, since it’s the fastest way to get up and running locally (alternatively, you can download the binary from our Downloads page or spin up the sandbox).
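If you still need to install them, the Homebrew commands are simply:

brew update
brew install influxdb
brew install telegraf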

The accompanying repo for this blog post can be found here. To write points from CSV, we will be using the File Input Plugin with the CSV Parser.

Requirements:

  • Telegraf 1.8.0 or higher
  • InfluxDB 1.7.0 or higher

After installation, make sure that InfluxDB is running and Telegraf has stopped by typing the following command in your terminal: brew services list.

[Screenshot: brew services list showing influxdb and telegraf]

If you don’t see that InfluxDB is running, execute brew services start influxdb. Similarly, if Telegraf is running, you can use brew to stop that service with brew services stop telegraf.

First, I need to download a Telegraf config file with the appropriate Input and Output Plugins. As per the Getting Started with Telegraf documentation, I will use the following command in the terminal in the directory of my choosing.

telegraf -sample-config -input-filter file -output-filter influxdb > file.conf

The -sample-config flag will generate the telegraf config file. The -input-filter and -output-filter flags specify the input and output sources of the data, respectively. The text following the > names the config file. I find it useful to name my telegraf config file after the Telegraf plugins I am using, so I can easily distinguish my config files in the future. After running the command, I open file.conf. My telegraf config has a total of 454 lines, complete with the File Input Plugin and the InfluxDB Output Plugin.

Four steps to CSV data ingest to InfluxDB

Step One: The first change I make to the config file is in the Output Plugins section. I want to specify the target database for my CSV data. I will change line 97 from the default # database = "telegraf" to database = "csv" (or any database name of my choosing, so that I can easily find the CSV data).
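After the edit, the relevant piece of the Output Plugins section of file.conf looks like this (a sketch; Telegraf creates the database on startup if it doesn’t already exist):

[[outputs.influxdb]]
  ## The target database for writing the CSV metrics
  database = "csv"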

My csv data looks like:

[Screenshot: the sample CSV file]

The first column and row are junk. I’ve also commented out the last row. My timestamp is in Unix time with nanosecond precision. In Step Two, I make sure not to include those rows and columns in my data ingest by adding some lines to my config file.
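To make that layout concrete, here is a hypothetical CSV in the same shape (every value below is invented for illustration): the first row and first column are junk, the second row is the header, the last row is commented out, and the time column holds Unix-nanosecond timestamps.

junk,junk,junk,junk,junk
junk,measurement_name,tag_key,value,time
junk,cpu,server01,0.64,1554321000000000000
junk,cpu,server02,0.71,1554321060000000000
#junk,cpu,server03,0.58,1554321120000000000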

Step Two: Next, I want to work on the Input Plugins section of the config. First I will specify the path of my CSV file in line 455 of the telegraf config. Since my config file is in the same directory as my CSV, line 455 is simply my file name: files = ["example"] (otherwise, make sure to include the full path). I will also add the following lines to the bottom of my config, under the Input Plugins section, to ensure that I only ingest the data that I care about:

## Data format to consume.
data_format = "csv"

## Indicates how many rows to treat as a header. By default, the parser assumes
## there is no header and will parse the first row as data. If set to anything more
## than 1, column names will be concatenated with the name listed in the next header row.
## If `csv_column_names` is specified, the column names in the header will be overridden.
csv_header_row_count = 1

## Indicates the number of rows to skip before looking for header information.
csv_skip_rows = 1

## Indicates the number of columns to skip before looking for data to parse.
## These columns will be skipped in the header as well.
csv_skip_columns = 1

## The character reserved for marking a row as a comment row.
## Commented rows are skipped and not parsed.
csv_comment = "#"

## The column to extract the name of the metric from.
csv_measurement_column = "measurement_name"

## Columns listed here will be added as tags. Any other columns
## will be added as fields.
csv_tag_columns = ["tag_key"]

## The column to extract time information for the metric.
## `csv_timestamp_format` must be specified if this is used.
csv_timestamp_column = "time"

## The format of time data extracted from `csv_timestamp_column`.
## This must be specified if `csv_timestamp_column` is specified.
csv_timestamp_format = "unix_ns"
Extra config options: It’s worth taking a look at a couple of other variables in the config file.

Line 36 defaults to metric_batch_size = 1000. It controls the size of writes that Telegraf sends to InfluxDB. You might want to increase that value if you’re performing a bulk data import. To determine the appropriate metric_batch_size, I recommend looking at these hardware sizing guidelines. Finally, if you’re using the OSS version and trying to import several hundred thousand points, take a look at enabling TSI. Enabling TSI can help with series cardinality performance. Take a look at this link to learn about how to enable TSI in InfluxDB 1.7.

Line 69 defaults to debug = false. If you have trouble writing points to InfluxDB, set the debug variable to true to get debug log messages.
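Both of these options live in the [agent] section of the config. As a sketch (the batch size here is just an illustration; size yours per the hardware guidelines above):

[agent]
  ## Larger batches reduce write overhead during bulk imports (default: 1000)
  metric_batch_size = 5000

  ## Emit debug log messages while troubleshooting writes
  debug = true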

Step Three: Run Telegraf with the config file we just edited by copying and pasting the following line in your terminal:

telegraf --config $PWD/file.conf

You should see this output if you have debug = true:

[Screenshot: Telegraf debug output confirming the write]

Step Four: Now we’re ready to query our data. You can start the influx shell by running influx in your terminal.

Run use csv to select your database and verify that your inserts were successful with the following query: select * from measurement_name limit 3. Use precision rfc3339 to convert the timestamp into human-readable format.
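Put together, the session looks like this (commands only):

$ influx
> use csv
> precision rfc3339
> select * from measurement_name limit 3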

[Screenshot: query results in the influx shell]

That’s all there is to configuring the File Input Plugin and writing data to InfluxDB from a CSV file. Please note that the File Input Plugin accepts many other data formats, including JSON, line protocol, and collectd, just to name a few. If you need more help using the CLI, please look at the documentation.

Conclusions about importing CSV to InfluxDB

Finally, I highly recommend using a Telegraf plugin to write points to your InfluxDB database, because Telegraf is written in Go, and Go is much faster than Python. If you have your heart set on using Python, though, I recommend checking out this csv-to-influxdb repo. One advantage of the Python script found in that repo is that it gives you finer-grained control over which columns are ingested. By contrast, you can’t cherry-pick columns with the Telegraf File Input Plugin; there is no way to exclude an unwanted column sandwiched between valuable ones. This functionality might not be included in the Telegraf File Input Plugin because it would slow down the reads, which could be problematic for bulk data imports. However, Telegraf is completely open source, so I encourage you to learn how to write your own Telegraf plugin and contribute that functionality if you please!

I hope this tutorial helps get you started with Influx. If you have any questions, please post them on the community site or tweet us @InfluxDB.



Why You Want Easy-to-Setup Grafana Dashboards

Permalink - Posted on 2019-03-29 15:00, modified on 2019-03-28 21:07

The list of reasons to adopt Grafana dashboards is long.

Among other things, Grafana dashboards are excellent tools for gaining insight into time-series data. In addition, Grafana readily integrates with InfluxDB and Telegraf to make monitoring of sensor, system and network metrics much easier and far more insightful.

The process of setting up a Grafana dashboard and integrating it with various data sources is straightforward. Grafana template variables enable you to create dynamic dashboards that you can make changes to in real-time.

In this post, we cover in more detail what you have to gain by setting up Grafana Dashboards and the easy steps involved to do that.

What is a Grafana Dashboard?

A Grafana dashboard is a powerful open source analytical and visualization tool that consists of multiple individual panels arranged in a grid. The panels interact with configured data sources, including (but not limited to) AWS CloudWatch, Microsoft SQL Server, Prometheus, MySQL, InfluxDB, and many others. Grafana is designed so that each panel is tied to a data source. Because Grafana dashboards support multiple panels in a single grid, you can visualize results from multiple data sources simultaneously.

The purpose of Grafana Dashboards

The purpose of Grafana dashboards is to bring data together in a way that is both efficient and organized. It allows users to better understand the metrics of their data through queries, informative visualizations and alerts. Not only do Grafana dashboards give insightful meaning to data collected from numerous sources, but you can also share the dashboards you create with other team members, allowing you to explore the data together.

Another key aspect of Grafana dashboards is that they are open source, which allows for even more customization and power, depending on how comfortable you are with coding. However, you do not need extensive knowledge of coding to create your own fully functioning Grafana dashboard.

How to set up a Grafana Dashboard

It is very easy to set up a Grafana dashboard. You begin by creating a new and blank Grafana dashboard by clicking on the Dashboard link, which is located on the right side of the Dashboard Picker.

[Screenshot: creating a new Grafana dashboard]

Dashboards contain panels, so now that you have a blank dashboard, the next step is to add your first panel. Note that Grafana ships with a variety of panels to help you get started quickly. You can think of Grafana panels as visualization building blocks.

Panels are added via the Add Panel icon that is located at the top of the menu. Panels are not very useful unless some type of graph is associated with them. Graphs depend on data, so each panel that you add to the dashboard will be associated with a data source. To retrieve information from that data source for the panel, you will need to create a query.

A query is set up by editing the graph that appears on the new panel. Click on the graph title, followed by Edit. This will open up the Metrics tab, where you are presented with a Query Editor. This easy-to-use Query Editor allows you to build queries based on the data source for your panel. Grafana will take the results of the query and provide visualizations of the resulting metrics. Note that the Query Editor will allow you to build more than one Query for the data, thus supporting multiple series for graphs.

The Metrics tab is also where you change the graph style, apply functions to the group of metrics, set the auto-refresh rate for the visualization, and adjust properties such as time range controls and zoom. Graphs can include bars, lines, points, and multiple Y-axes. Grafana offers smart Y-axis formatting, axis labels, grid thresholds, and annotations.

Once you have created the panels that you want, building the dashboard becomes a simple process of drag and drop. Drag the panels to the dashboard grid and drop them where you want them to appear. Then, resize them to suit your needs.

If you do not feel comfortable starting your own Grafana dashboards from scratch or don’t know how to set up a Grafana dashboard, there are official Grafana dashboard examples available on the Grafana Labs website. In fact, this is where you will find the best Grafana dashboards.

Grafana Dashboard examples

Find Grafana Dashboards for use with InfluxDB.

These Grafana dashboard examples can be filtered based on the collector used. Some of these collectors include Beats, Icinga, Snap and Telegraf.

You can also filter these examples based on panel type. Text, table, trend box, annunciator, boom table, breadcrumb, and alarm box are just a few of the many panel types for which you can find Grafana dashboard examples. If connecting to a data source is your primary concern, you can also filter the available dashboard examples by the data source. This allows you to see Grafana dashboard examples for sources such as AWS CloudWatch, Amazon Timestream, Prometheus, Elasticsearch, InfluxDB and many others.

Grafana Dashboard Templates

Grafana dashboard templating is used to make your dashboards more interactive. In short, you create dashboard Template variables that can be used almost anywhere in a Grafana dashboard. The use of variables allows you to make dynamic, on-the-fly changes to the dashboard. This significantly adds to the usefulness and power of Grafana.
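For example, with an InfluxDB data source, a panel’s InfluxQL query can reference a hypothetical $host template variable alongside Grafana’s built-in $timeFilter and $__interval macros (the cpu measurement and usage_idle field shown are what Telegraf’s CPU plugin collects):

SELECT mean("usage_idle")
FROM "cpu"
WHERE "host" =~ /^$host$/ AND $timeFilter
GROUP BY time($__interval)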

Conclusion

The Grafana dashboard is a powerful data analytics and visualization tool that integrates with a wide variety of sources that store time series data, including InfluxDB. The Grafana dashboard is open source and provides extensive, user-friendly documentation. For those new to Grafana, remember that there are resources where you can find the best Grafana dashboards to help you get started. For those interested in more dynamic dashboards, Grafana dashboard templates would be your starting point.

When a Grafana dashboard is integrated with an InfluxDB data source, you have access to customized visual presentations of key metrics and events with fast rendering, even over large time spans. Grafana dashboards, when integrated with InfluxDB, provide extremely useful and revealing visualizations of system and network metrics.

If you need access to real-time analytics based on time series data, InfluxData is the platform for you. If you want to delve deep into the meaning of that critical data, then combining InfluxData with Grafana dashboards is the smartest choice you can make. Choose InfluxData and create the best Grafana dashboards.



Release Announcement: InfluxDB 2.0.0 Alpha 7

Permalink - Posted on 2019-03-29 01:25

A new release of InfluxDB 2.0 Alpha is available now. As described in our CTO Paul Dix’s original release announcement for InfluxDB 2.0, we will be shipping regular updates as we add new features and fix issues. Please keep in mind that these alpha builds are not meant for testing performance or production usage, but more for giving feedback on the functionality, user experience, and APIs.

This release of InfluxDB 2.0 Alpha includes the following enhancements:

  • You can now import and export everything you create in the UI including Dashboards and Tasks. This allows you to share what you create easily with other users.
  • We have introduced Variables powered by Flux into the UI. This allows you to build more flexible dashboards and queries through the UI. These are similar to Template Variables in Chronograf.
  • Updated Flux library to version 0.23.0. Check out the Flux library release for the latest updates.

Please download and explore our latest iteration. If you find issues or have questions, please post them in our InfluxDB GitHub repo or on our Community Site and we will take a look. Thank you for all the interest and positive feedback so far.

The latest release of InfluxDB 2.0 Alpha can be downloaded here.



Publishing Data to InfluxDB from Swift

Permalink - Posted on 2019-03-28 20:32, modified at 23:46

I’ve been a very busy man. It was only a few days ago that I wrote about a new InfluxDB library for writing data from Arduino devices to InfluxDB v2 and here I am again, writing about a new library for writing data to InfluxDB. This time, it’s in Swift. Now your native Apple apps can write data directly to InfluxDB v2.0 with ease.

It’s a really simple library to use, and you can download the entire Xcode project for it from my GitHub. You can use it to write single data points to the DB, or to do bulk writes of any size. Here’s a quick tutorial on how to use it.

let influxdb = InfluxData()

That gets you an instance of the InfluxData class. Once you have that, you’ll need to set some configuration parameters for it.

influxdb.setConfig(server: "serverName", port: 9999, org: "myOrganization", bucket: "myBucket", token: "myToken")

You will, of course, need to set all those values according to your InfluxDB v2.0 server’s settings. You can also set the time precision with

let myPrecision = DataPrecision.ms // for milliseconds; use .us for microseconds and .s for seconds
influxdb.setPrecision(precision: myPrecision)

At this point, you’re ready to start collecting data and sending it to InfluxDB v2.0! For each data point you collect and want to store, you will create a new Influx object to hold the tags and data.

let point: Influx = Influx(measurement: "myMeasurement")
point.addTag(name: "location", value: "home")
point.addTag(name: "server", value: "home-server")
if !point.addValue(name: "value", value: 100.01) {
    print("Unknown value type!\n")
}
if !point.addValue(name: "value", value: 55) {
    print("Unknown value type!\n")
}
if !point.addValue(name: "value", value: true) {
    print("Unknown value type!\n")
}
if !point.addValue(name: "value", value: "String Value") {
    print("Unknown value type!\n")
}

As you can see, it accepts integers, floating-point values, Booleans, and strings. If it cannot determine the data type, it will return the Boolean false, so it’s always a good idea to check the return value.

For best performance, we recommend writing data in batches to InfluxDB, so you’ll need to prepare the data to go into a batch. This is easy to do with a call to

influxdb.prepare(point: point)

And when it’s time to write the batch, just call

if influxdb.writeBatch() {
    print("Batch written successfully!\n")
}

Again, writeBatch() returns a Boolean on success or failure, so it’s a good idea to check those values.

If you want to write each data point as it comes in, just take the data point you created above and call

influxdb.writeSingle(dataPoint: point)

You can write data to multiple measurements simultaneously as each data point is initialized with its measurement, and you can add as many tags and fields as you’d like.
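For instance, reusing the calls shown above, two points bound for different measurements can go into the same batch (the room tag and ppm field are invented for this sketch):

// Two points for two different measurements
let temp = Influx(measurement: "temperature")
temp.addTag(name: "room", value: "office") // hypothetical tag
_ = temp.addValue(name: "value", value: 22.4)

let co2 = Influx(measurement: "co2")
co2.addTag(name: "room", value: "office")
_ = co2.addValue(name: "ppm", value: 612) // hypothetical field

// Stage both, then send them in one batch write
influxdb.prepare(point: temp)
influxdb.prepare(point: co2)
if influxdb.writeBatch() {
    print("Both measurements written in one batch\n")
}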

This is really the first pass at the InfluxDB v2.0 Swift library as I’ll be adding the ability to query, create buckets, and a lot of other features of the Flux language to the library in the future, but since what most people want to do right away is write data to the database, I thought I’d get this out there.

I hope this is helpful! I know it has been for me! You see, I have lately been just using my Mac laptop to grab data off of my Bluetooth CO2 sensor that I built. In order to do that, I built a small BLE application that connects to the sensor, subscribes to the data ID, and constantly writes the data to InfluxDB. Needless to say, I used this library and have been scraping this data and storing it happily.

I’d love to hear what you plan to do with a Swift library for 2.0, so be sure to follow me on Twitter and let me know what you’re doing!



Writing Data from Arduino to InfluxDB v2

Permalink - Posted on 2019-03-26 16:01

As InfluxData moves ever closer to releasing v2.0, it’s becoming increasingly important to be able to get data into InfluxDB v2, of course. Makes sense, right? Since the vast majority (like, indistinguishable from 100%) of my data comes from IoT devices, I decided it was time to start making those devices InfluxDB v2-capable.

I’m happy to say that the first step in that direction is now complete! One of my favorite sensors is a particulate matter sensor that measures the amount of very small particulate in the air (from 2.5µm to 100µm in diameter). This stuff, it turns out, is really, really bad for you. So knowing how much is in the air is a good idea. To that end, I ordered one of these sensors from Adafruit:

[Image: the Adafruit PM2.5 air quality sensor]

It’s small and easy to hook up to pretty much anything since it just spews data out via UART. Since I have a giant pile of ESP8266 boards lying around (I typically order them by the dozen since they are so cheap and easy to deal with), I hooked it up to one of those. The code was simple, thanks to Adafruit providing it, and there was a handy InfluxDB library to write data with, but it only supported InfluxDB v1.x. The first thing I did (because I was in a hurry) was to grab the 1.x library and just re-write it for 2.x. This took me about half an hour or less, and it worked great! (You can use that version here if you’d like.) That really wasn’t the right solution, though. So today I went back and created a proper fork of the original repository, and updated it to support either version 1.x or version 2.x of InfluxDB. I’ve of course submitted a proper Pull Request against the original library and hope that it will be accepted/merged soon.

Let’s walk through what it takes to use this new library then. It’s dead simple, really. At least with Arduino, all you have to do is add the Library, then include it in your sketch:

#include <InfluxDb.h>
//#include <InfluxDataV2.h> // if you want to use the other library I built that's in my GitHub
#define INFLUXDB_HOST "myhost.com"
Influxdb influx(INFLUXDB_HOST);

That gets you started. Next, you’re going to need some specific information from your InfluxDB v2.0 (alpha still!) installation. Notably, you will need the organization, bucket, and token that are associated with your account. You can find these by pointing your web browser at your InfluxDB server, port 9999, entering your username and password, and going to the Configuration Page:

[Screenshot: the InfluxDB v2.0 configuration page]

You can then enter them into the Arduino Sketch:

influx.setBucket("myBucket");
influx.setVersion(2);
influx.setOrg("myOrg");
influx.setPort(9999);
influx.setToken("myToken");

Once you’ve done that in your setup() function, you can start writing data to your v2.0 Influx server from loop()!

int loopCount = 0; // running counter, declared at file scope

void loop() {
    loopCount++;
    InfluxData row("temperature");
    row.addTag("device", "alpha");
    row.addTag("sensor", "one");
    row.addTag("mode", "pwm");
    row.addValue("loopCount", loopCount);
    row.addValue("value", random(10, 40));
    influx.write(row);
    delay(5000);
}

See? I told you it was easy!

The post Writing Data from Arduino to InfluxDB v2 appeared first on InfluxData.