Display live emoncms energy data on your desktop

Alexander Price @saelaenx has documented a neat setup where he uses GeekTool on his Mac to pull his live energy data and current temperatures from emoncms. The data itself comes from an emonTx and emonGLCD, posted online with a NanodeRF.

It would be great to have a similar arrangement for Ubuntu Linux using open-source tools; that way I can have the same on my desktop :-)

The same goes for an Android widget to display my live energy data on my home screen... one for the future.

Great work Alex! Read his full blog post here and see below for a summary:

Since a few people have been asking about this on Twitter, I decided to write a quick blog post on how I set up my current GeekTool arrangement. Obviously this method only works on a Mac, but I imagine a similar result could be achieved on Windows using Rainmeter.
GeekTool Desktop Widget 

emoncms dashboard

emonGLCD with Alex's nice custom monospaced fonts

Read his full blog post here: http://saelaenx.wordpress.com/2012/09/27/geektool-setup/

Logo Ideas

Recently our friend and freelance artist/graphic designer Gwil Noble has knocked up a few logo possibilities for OpenEnergyMonitor.

Gwil has also made some gorgeous little context icons we will be using in our new front page re-design soon.




For the main icon there are three possibilities so far. They're all at the prototype stage, and there is a chance we end up using something totally different! If you have any creative ideas for 'D', 'E' or even 'F', we would love to add more options to the list.

We have set up a poll to enable you to vote on which logo you prefer: http://openenergymonitor.org/emon/logovote

Forum discussion thread here: http://openenergymonitor.org/emon/node/1093

Data portability: Version 1 complete!

Following on from the last series of posts detailing an implementation for exporting and importing emoncms feed data, making it possible to move feed data between emoncms instances with ease, these features are now fully integrated into the latest emoncms version.

I have also created a rough initial interface for selecting the remote account to download from and the feeds to download. You can get to this page by clicking on the Sync feeds button on the feed list page:

To select an account, enter the account URL and write apikey and click OK; after a moment the feed list of the remote account will appear.

You can then click on the feeds you wish to download. This will enter the feeds into a download queue. As detailed in the last post, the download and import process is run by a separate script. The script is called import.php and can be found in the root directory of the emoncms install. You can either run this script manually or set up a cron job to run it periodically.
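If you go the cron route, something like the following entry works. This is a minimal sketch: the install path /var/www/emoncms and the log location are assumptions, so adjust them to your setup. Running from the emoncms root matters because the script opens its importlock file with a relative path.

# Hypothetical crontab entry: run the import script once a minute from the
# emoncms root directory (so it can find 'importlock'); adjust paths as needed.
* * * * * cd /var/www/emoncms && php import.php >> /var/log/emoncms-import.log 2>&1

The lock check inside import.php means overlapping runs simply exit, so a one-minute schedule is safe even when an import takes longer than a minute.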


Once you start downloading feeds, refreshing the page will show the 'remote ahead by' value decrease as data is downloaded, oldest data first. It is also possible to sync the data again at a later date to bring the local copy up to date.

This feature makes for a really convenient way to back up feed data, as you only need to download the latest changes rather than all the data every time.

There are still a few things missing in the current implementation that would be great to add:

- Histogram data is not downloaded correctly, as the import and export scripts currently only work with one column of data.

- It would be great to have a feature to select all the remote feeds to download, rather than having to click each one.

- It would be nice to have the sync interface page built with JavaScript/jQuery so that it updates automatically on progress without having to refresh the page.

But other than these points, which can be fixed and improved in time, I think it's largely functional and good to go.

Data portability: Importing feed data into emoncms

The last post detailed an implementation for exporting feed data from emoncms. This post explores an implementation for importing feed data, which can be used in conjunction with the export implementation to sync or download feed data from one emoncms instance to another, enabling full feed data portability.

I started by trying to create an import script that was built into a normal page action: you would go to a page, i.e.:

sync/feed?id=10...

The problem with this is that downloading and importing a feed takes a long time, so the page appears unresponsive for that time; the loading animation just turns and turns.

If you mistakenly click the refresh button, the initial process will not close, so you start to get duplicate processes which mess up the data and increase the server load.

Ideally while downloading and importing the data there would be a progress bar so that you can see what's happening and any other useful feedback if things go wrong.

To get this kind of feedback you need to be able to update the browser as the process is running.

You also want the syncing to happen directly between servers rather than server to local computer to server, as this is faster when both servers are remote and the internet connection to the local computer is relatively slow. In other words, you want the import process handled in PHP, which is server side (you could use another server-side language if you wanted).

The way to get feedback about what's happening while having the import process handled in PHP is to have two separate scripts: one that handles the importing, and another that serves the GUI web page and can be refreshed regularly.

We can extend the idea of a separate import script to sort the problem of duplicate processes too by implementing a download and import queue.

When a user clicks to download/sync a feed on the sync page, the action behind this enters the details of the feed to be downloaded into the queue; it can also check at this point that the feed does not already exist in the queue, as in the rough sketch below.
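As a rough sketch of what that action could look like (the importqueue column names match those used by the import script later in this post, but the actual action code in emoncms may differ, and in a real implementation $baseurl and $apikey should be escaped before being used in the query):

<?php
// Hypothetical sketch: add a remote feed to the import queue.
// $localfeedid, $remotefeedid, $baseurl and $apikey come from the sync page.
$localfeedid  = intval($localfeedid);
$remotefeedid = intval($remotefeedid);

// Only queue the feed if an identical entry is not already waiting
$result = db_query("SELECT queid FROM importqueue
  WHERE localfeedid='$localfeedid' AND feedid='$remotefeedid'");

if (!db_fetch_array($result))
{
  db_query("INSERT INTO importqueue (localfeedid, baseurl, apikey, feedid)
    VALUES ('$localfeedid','$baseurl','$apikey','$remotefeedid')");
}
?>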

We can then run the import script independently of the GUI page, such as via a scheduled cron job that runs the script once a minute. We can also put a check in there to ensure that only one instance of the script is running at any given time.

The import script then checks if there are feeds to be downloaded in the import queue, working through each queue item sequentially.

The added advantage of the queue implementation is that we can keep the load that importing places on the server low, by allowing only one feed to be downloaded and imported at any given time.

So all in all this implementation:
  • Enables feedback on import progress.
  • Ensures we don’t get duplicate feed syncing processes.
  • Ensures that the server load is kept under control.
On to the code. Here's the import script that works through the import queue, downloading and importing each item sequentially. I would be interested to hear of any ways the efficiency of the export and import process could be improved.

<?php

// Ensure only one instance of the script can run at any one time.
$fp = fopen("importlock", "w");
if (!flock($fp, LOCK_EX | LOCK_NB)) { echo "Already running\n"; die; }

// Connect to the database
define('EMONCMS_EXEC', 1);
require "Includes/process_settings.php";
require "Includes/db.php";
switch (db_connect()) {
  case 0: break;
  case 1: break;
  case 3: show_dbsettingserror_message(); die;
}

// Fetch the import queue
$result = db_query("SELECT * FROM importqueue ORDER BY `queid` Asc");

// For each item in the queue
while ($row = db_fetch_array($result))
{
  $queid = $row['queid'];
  $feedname = "feed_".trim($row['localfeedid'])."";

  // Check if we have already downloaded part of the feed and get the last
  // value entered so that we don't download and insert data that has already
  // been inserted. This makes this utility useful for syncing in general,
  // and in particular for backups that only download the latest changes.
  $feed_result = db_query("SELECT * FROM $feedname ORDER BY time Desc LIMIT 1");
  $feed_row = db_fetch_array($feed_result);
  $start = 0; if ($feed_row[0]) $start = $feed_row[0];

  // Open the file served from the export page on the remote server.
  // Note: opening a remote URL with fopen() requires allow_url_fopen
  // to be enabled in php.ini.
  $url = $row['baseurl'].'/feed/export?apikey='.$row['apikey'].'&id='
    .$row['feedid'].'&start='.$start;

  echo "Opening file $url\n";
  $fh = @fopen($url, 'r');
  if (!$fh) { echo "Could not open $url\n"; continue; }

  // Read through the file
  $i = 0; $vals = "";
  while (($data = fgetcsv($fh, 0, ",")) !== FALSE)
  {
    $feedtime = $data[0]; $value = $data[1];

    if ($feedtime != '' && $value != '')
    {
      $i++;
      // Construct the values part of the query
      if ($i != 1) $vals .= ',';
      $vals .= "('$feedtime','$value')";

      // Execute query every 400 rows (same block size as export script)
      if ($i > 400)
      {
        $i = 0;
        if ($vals) db_query("INSERT INTO $feedname (`time`,`data`) VALUES ".$vals);
        $vals = "";
      }
    }
  }
  // If there are rows left over, insert them here at the end
  if ($vals) db_query("INSERT INTO $feedname (`time`,`data`) VALUES ".$vals);

  fclose($fh);

  echo "Transfer complete\n";
  echo "Deleting item $queid from queue\n";
  db_query("DELETE FROM importqueue WHERE queid = $queid");
}

?>

In the next post I will explain the interface for generating the feed import queue.

Data portability: Exporting feed data from emoncms

Following on from the previous two posts:
Making emoncms data portable
Data portability challenges

Here's my initial solution for the feed export code, which also allows its load on the server to be regulated.

In trying to work this out, I first tried requesting all the feed data (which could be millions of rows) in one MySQL query and then writing the data to a file in one push. Testing on my laptop server installation and monitoring the load with the Linux top tool, the MySQL query can use around 97% of the CPU for up to tens of seconds depending on the feed size. Once the query is complete, the writing of the file (an Apache process) then takes up another block of 97% CPU use until the file has completed downloading. Emoncms still seems fairly responsive during that time.

My next thought was to break up that larger query into smaller blocks and introduce a delay between the reading of each block; maybe in future versions the delay could wait for other actions to finish before continuing.

Splitting the query into blocks of 400 rows, alternately loading and writing to the file without a delay, drops the load to around 50% for both the MySQL and Apache processes.

By introducing a delay of about 80 milliseconds (the 80,000 microseconds used in the code below), it's then possible to drop the load down to roughly 10% for the Apache process and a little less for the MySQL process.

Increasing the block size and/or reducing the delay speeds the transfer up; reducing the block size and/or increasing the delay slows it down.
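As a rough worked example: with a block size of 400 rows and an 80 ms pause, and ignoring the query and write time itself, the transfer is capped at roughly 400 / 0.08 = 5,000 rows per second, so a feed of a few million rows takes on the order of ten to twenty minutes to export at these settings.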

Any thoughts on improving this would be welcome; I will try to add it to the emoncms repo soon.

Here's the code:


<?php
  // Open database etc here

  // Extend timeout limit from 30s to 2 mins
  set_time_limit(120);

  // Feed id and start time of feed to export
  $feedid = 1; $start = 0;

  // Regulate mysql and apache load.
  $block_size = 400; $sleep = 80000;

  $feedname = "feed_".trim($feedid)."";
  $fileName = $feedname.'.csv';

  // There is no need for the browser to cache the output
  header("Cache-Control: no-cache, no-store, must-revalidate");

  // Tell the browser to handle output as a csv file to be downloaded
  header('Content-Description: File Transfer');
  header("Content-type: text/csv");
  header("Content-Disposition: attachment; filename={$fileName}");
  header("Expires: 0");
  header("Pragma: no-cache");

  // Write to output stream
  $fh = @fopen('php://output', 'w');

  // Load new feed blocks until there is no more data
  $moredata_available = 1;
  while ($moredata_available)
  {
    // 1) Load a block
    $result = db_query("SELECT * FROM $feedname WHERE time>$start
      ORDER BY time Asc Limit $block_size");

    $moredata_available = 0;
    while ($row = db_fetch_array($result))
    {
      // Write block as csv to output stream
      fputcsv($fh, array($row['time'], $row['data']));

      // Set new start time so that we read the next block along
      $start = $row['time'];
      $moredata_available = 1;
    }

    // 2) Sleep for a bit
    usleep($sleep);
  }

  fclose($fh);
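Assuming this script is routed as the feed/export action in emoncms (which is what the import script in the previous post expects), the export can then be fetched directly with a URL of the form:

http://emoncms.org/feed/export?apikey=YOURAPIKEY&id=1&start=0

where start is the feed time to resume from (0 to download the whole feed), and the apikey and feed id are placeholders for your own.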

In the next post I will discuss creating a script to import feed data by pointing directly at the export script on the other server, plus a background daemon process and queueing mechanism to manage the import process.

Data portability challenges

Continuing from yesterday's post on making emoncms data portable.

There are some challenges with implementing data portability for energy data. 5s energy feed data can grow to be quite large: I have several feeds that are approaching 100MB each and a total account size of about 700MB, and that's only just approaching one year.
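To put that in context: a 5 second feed logs 86,400 / 5 = 17,280 rows per day, or roughly 6.3 million rows per year, so at (very roughly) 15 bytes per row of MySQL storage including index overhead, a single feed heads towards 100MB per year, which matches what I'm seeing.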

Moving large quantities of data uses server resources, including CPU, memory and bandwidth. Emoncms.org is hosted on a BigV virtual machine, where you get 1 GiB RAM, 25 GiB disk space and 200 GiB transfer for £10 a month, which is a fantastic deal for those specs.

As exporting or importing large feeds takes time, it's important that it does not hog server resources and make the server unusable for other users. Any import or export code needs to use a small percentage of the server resources, and ideally the amount of resources used should be adjustable. We also need to keep an eye on the data transfer limits.

In the next post I will post up the initial working feed export code.






Making emoncms data portable

One of the features I've wanted in emoncms for some time is the ability to move feed data between different servers with ease.

I want to be able to log in to the emoncms instance running on my local machine and go to a page where I can enter the authentication details and location of my account on a remote emoncms server such as emoncms.org. It would then bring up a list of the feeds that are available, and I could either download particular feeds or just click download all and it would automatically download the whole account.

This would allow me to get at my data when I'm offline, and it could serve as a personal backup of the data. Another need for this feature: we have an old remote emoncms instance with some old energy data on it, and this would make it really easy to move that data to the new instance.

This feature could also work in a similar way to an Ubuntu One or Dropbox-like service; there could be an automatic syncing option.

This would make the energy data very portable. Data portability helps to create an open web and ensures that we are in full control of our energy data: we can remove it from a service if we no longer want to be with that service, and use it in other applications, including applications that may not have been thought of in the original design.

Over the next few blog posts I'm going to try and explore how this can be implemented. I've already got some working code, but a lot of work will need to be done to make this a nice, well-implemented feature, so if you can help with the coding and/or provide insight into better ways of doing things, that would be most welcome.

Cleanweb UK

Last Thursday Glyn and I attended the Ignite Cleanweb UK event in London, an event exploring the power and potential of the web to help us solve our environmental challenges.

The talks were all in the 5 minute, 20 slides, 15 seconds per slide format, which was a challenge to create but makes for a really fascinating event where you hear about a lot of different projects in a short space of time. Videos of all the presentations are up on the web here: http://www.cleanweb.org.uk/ignite.html.

Here's a video of our presentation on OpenEnergyMonitor:



The slides on their own can be viewed on slideshare:




Thanks a lot to James Smith (@floppy) for organising the event and inspiring us with the Cleanweb UK manifesto at the end.

EmonTx and NanodeRF Code guides


These EmonTx and NanodeRF code guides go through the main components required to put a full EmonTx and NanodeRF (or ENC28J60 based OpenKontrolGateway) firmware together.

If you're just starting out, you might find it better to go through these guide examples first rather than starting straight in with the fully fledged firmware examples.


Cleanweb UK - Ignite Evening - London 13th Sep

Thursday, September 13, 2012

7:00pm, at Forward Technology in London



Trystan and I are looking forward to attending and speaking at the first Cleanweb UK Ignite evening in London. It looks like it will be a great event: there will be 12 short (5 min), fast-paced talks to do with sustainability and the internet.

Please come along and join us; registration and full event details here: http://www.cleanweb.org.uk/ignite.html