Leaflet in R – West Nile Virus Map

Recently I finished working on a demonstration for a West Nile Virus map. I found myself referring back to my example often, so I thought that if it’s useful for me, maybe it will be useful for someone else!

Most of the data I was using was already in the public domain, but it only took a few edits to rely 100% on public data. Now I have a nice shareable example.

I’m not going to try to embed it into this post; here’s a link to the map, and here’s the source code for reference.

Also, I’ve started a GitHub project to store some of my most often used map examples, which I hope to keep updated: https://github.com/geneorama/wnv_map_demo/

This is what the map looks like:

I did normal leaflet things: I used the values to control the circle size, customized the opacity to make it easier to see the base map below, and plotted the red circle on top of the blue circle to give a sense of the proportion of mosquitoes affected (a quick sketch of this follows the list below).

However I learned some new tricks in this map:

  • Of course, data.table provides a fast and flexible way to manipulate and reshape the data
  • I developed my own Mapbox template to mimic the look and feel of our Opengrid application
  • I developed my own HTML pop-ups using htmltools::HTML
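
Here’s a minimal sketch of the pattern, not the actual map code (that’s linked above). The coordinates, counts, and column names below are made up for illustration, and the pop-up strings could just as well be wrapped in htmltools::HTML before being passed to leaflet.

library(leaflet)

## Hypothetical trap data for illustration only
dat <- data.frame(
  lat        = c(41.88, 41.95),
  lon        = c(-87.63, -87.70),
  mosquitoes = c(120, 45),   # mosquitoes collected in the trap
  positive   = c(15, 5)      # mosquitoes that tested positive for WNV
)

## Simple HTML pop-ups
popups <- sprintf("<b>Trap</b><br/>Total: %s<br/>Positive: %s",
                  dat$mosquitoes, dat$positive)

leaflet(dat) %>%
  addTiles() %>%
  ## blue circle sized by the total number of mosquitoes
  addCircleMarkers(~lon, ~lat, radius = ~sqrt(mosquitoes),
                   color = "blue", stroke = FALSE, fillOpacity = 0.4) %>%
  ## red circle drawn on top, sized by the positive count
  addCircleMarkers(~lon, ~lat, radius = ~sqrt(positive),
                   color = "red", stroke = FALSE, fillOpacity = 0.6,
                   popup = popups)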

Please feel free to adapt the Rmd file for your own purposes!

Happy mapping, and remember to use your DEET based bug repellent.

Big updates for the geneorama R package

The geneorama package was heavily updated today.

To install geneorama use devtools: devtools::install_github("geneorama/geneorama")

The main changes include:

  • Documentation! Yes, now there are actual help files complete with examples. I’m using roxygen2 to manage the NAMESPACE and compile the help documents
  • Unit tests – I decided it was time to practice what I preach and add some unit tests to my package. Now that I’m using it in more and more places, it only makes sense that it has some basic tests (a minimal sketch of the pattern follows).
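
The real documentation and tests live in the repository; here is just a minimal sketch of the pattern, with a hypothetical function, and assuming testthat for the unit tests.

## R/clipped_mean.R -- a hypothetical function documented with roxygen2

#' Trimmed mean with NA handling
#'
#' @param x    numeric vector
#' @param trim fraction of observations to trim from each tail
#' @return a single numeric value
#' @examples
#' clipped_mean(c(1, 2, 3, 100), trim = 0.25)
#' @export
clipped_mean <- function(x, trim = 0.1) {
  mean(x, trim = trim, na.rm = TRUE)
}

## tests/testthat/test-clipped_mean.R -- a basic unit test

library(testthat)
test_that("clipped_mean ignores extremes and NAs", {
  expect_equal(clipped_mean(c(1, 2, 3, 100), trim = 0.25), 2.5)
  expect_equal(clipped_mean(c(1, NA, 3)), 2)
})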

The complete diff is on GitHub if anyone’s interested.

Bias Variance Tradeoff

This is my first knitr document; knitr lets you combine R code and text in a single formatted document.

I wanted to have an accessible example that illustrates the bias variance tradeoff.


An illustration of the Bias Variance Tradeoff

by Gene Leynes
http://geneorama.com/
http://www.linkedin.com/in/geneleynes

Summary

The Bias Variance Tradeoff is an important concept in machine learning. It helps you evaluate which model will perform best on new data.

When most people think of fitting a model, something like this comes to mind:
[Plot: a straight line fit through scattered data points]

Where you basically just draw the best straight line through some points. This paradigm makes it hard to imagine what someone would mean by “model selection”.

The bias variance problem arises when you start to use nonlinear models that don't have to follow straight lines.

If you consider this data fit with two different smoothing parameters:
[Plot: the same data fit with two different smoothing parameters]

you can get a sense of the problem.

Intuitively the plot on the left seems to do a better job at representing the information contained in the data… However the model on the right has absolutely no error on the observed data.

This is the bias variance tradeoff.
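
The plots above come from the knitr document; here is a minimal, self-contained sketch of the same idea using simulated data and two loess fits. The flexible fit wins on the data it was trained on, but typically loses on fresh data from the same process.

set.seed(1)
x <- seq(0, 10, length.out = 100)
f <- function(x) sin(x)                   # the true underlying signal
y <- f(x) + rnorm(length(x), sd = 0.5)    # noisy observations

fit_smooth <- loess(y ~ x, span = 0.75)   # high bias, low variance
fit_wiggly <- loess(y ~ x, span = 0.10)   # low bias, high variance

## error on the data used for fitting
mean((y - predict(fit_smooth))^2)
mean((y - predict(fit_wiggly))^2)         # smaller -- it chases the noise

## error on fresh data from the same process (what actually matters)
y_new <- f(x) + rnorm(length(x), sd = 0.5)
mean((y_new - predict(fit_smooth))^2)
mean((y_new - predict(fit_wiggly))^2)     # typically larger than the smooth fit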

Continue reading

Geneorama package now available

I now have a geneorama package available.  It’s not on CRAN, because it’s not even remotely documented. I do hope to do that at some point, but not today.

You can install it by simply opening up this file!

Be warned: Opening this file will modify your rprofile.site file (located in R\Rversion\etc).
The script will add the text “library(geneorama)” to the end of the profile file, if it isn’t already there, which will automatically load the geneorama package when you start R.

http://geneorama.com/code/Install Geneorama.RData

EDIT (2018): Do not install this way (I have removed the file so that you can’t download it). Use `devtools::install_github("geneorama/geneorama")` instead.

The installation works on both a PC and a Mac.  The automatic installation uses the .First function to simply copy the library files to your R program files location.
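
For the curious, the mechanism looked roughly like the sketch below. The paths and folder name are illustrative, and (per the edit above) this is shown only to explain how it worked, not as something to run.

## Illustrative only -- do not use this; install from GitHub instead.
.First <- function() {
  ## copy the bundled package folder into the default library location
  file.copy(from = "geneorama", to = .libPaths()[1], recursive = TRUE)

  ## append library(geneorama) to Rprofile.site if it isn't there already
  site    <- file.path(R.home("etc"), "Rprofile.site")
  profile <- if (file.exists(site)) readLines(site) else character(0)
  if (!any(grepl("library(geneorama)", profile, fixed = TRUE))) {
    cat("library(geneorama)\n", file = site, append = TRUE)
  }
}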

Continue reading

The “Data Scientist” explained: more than just a buzzword

We live in a brave new world where people possess far more data than ever before in history, but the amount of information we have per unit of data has never been lower. As people struggle to make sense of all this data, several new terms have emerged, such as Big Data, Business Intelligence, Map-Reduce, and Data Scientist… but what do these new words mean?

Definitions for these terms continue to emerge, but I’d like to share what I’ve learned about the “Data Scientist”.

I was inspired to write this because of an email I just got from Kaggle. (For those of you who don’t know, Kaggle is a website that offers analytical challenges. These challenges are open to anyone, and the best answer wins prize money that ranges from hundreds to millions of dollars.)

Kaggle’s Anthony Goldbloom offers this self-promotional but awesome tidbit that helps explain the role of the data scientist:

Thus who you decide to hire as your first data scientist — a domain expert or a machine learner — might be as simple as this: could you currently prepare your data for a Kaggle competition? If so, then hire a machine learner. If not, hire a data scientist who has the domain expertise and the data hacking skills to get you there.

Recently, I was reading about Map-Reduce, and I came across another nice explanation of the data scientist. This explanation is more comprehensive, yet still concise.

Data scientists use a combination of their business and technical skills to investigate big data looking for ways to improve current business analytics and predictive analytical models, and also for possible new business opportunities. One of the biggest differences between a data scientist and a business intelligence (BI) user – such as a business analyst – is that a data scientist investigates and looks for new possibilities, while a BI user analyzes existing business situations and operations.

Data scientists require a wide range of skills:

  • Business domain expertise and strong analytical skills
  • Creativity and good communications
  • Knowledgeable in statistics, machine learning and data visualization
  • Able to develop data analysis solutions using modeling/analysis methods and languages such as MapReduce, R, SAS, etc.
  • Adept at data engineering, including discovering and mashing/blending large amounts of data

People with this wide range of skills are rare, and this explains why data scientists are in short supply. In most organizations, rather than looking for individuals with all of these capabilities, it will be necessary instead to build a team of people that collectively has these skills.

Map Reduce and the Data Scientist, by Colin White (January 2012)

Granted, there’s an element of self-promotion here too, but this is a great description. I’ve had a hard time explaining my professional value proposition when I meet new people, because there are so many new concepts involved in my areas of specialization, and this description is quite helpful.

As companies recognize their need for someone to fill this role of the data scientist, they’re clearly struggling to define the role, advertise for the position, and evaluate candidates. Often they are overly focused on technical requirements, seeking a PhD in machine learning or someone with years of database programming experience.

It seems to me that they usually need someone who understands concepts like cross-validation or decision trees, and knows more than the difference between a flat file and a relational database; but the most important thing is that they need someone who can understand business problems, communicate with business leaders, and appreciate the technical considerations for application development.

Update:
Better link and explanation here
http://radar.oreilly.com/2010/06/what-is-data-science.html

Update: Sunshine in Chicago compared to Anchorage and Miami

I updated the last post to include a link to the source code, and updated the plots with attribution to the data source, timeanddate.com

Based on Jim’s comment on the last post I thought it would be easy to re-run the analysis for Anchorage.  However, the Anchorage data was more difficult to handle, due to a period of continuous twilight at various times in the year.

So, as a workaround I just downloaded the tables for sunrise and sunset.   Personally,  I was more curious about Miami than Anchorage… but they are both easy to run with the new code.

Here’s what we gain / give up in terms of daylight for these locations.

Also, I thought that the speed at which the days change was much more interesting when comparing cities:
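
The plots are in the original post; here is a rough sketch of the calculation behind them. The sunrise and sunset values below are made-up placeholders standing in for the downloaded tables.

library(data.table)

chicago <- data.table(
  date    = as.Date("2012-06-20") + 0:2,
  sunrise = c("05:15", "05:15", "05:16"),   # placeholder times
  sunset  = c("20:29", "20:30", "20:30")    # placeholder times
)

## convert "HH:MM" to minutes after midnight
to_min <- function(hhmm) {
  parts <- strsplit(hhmm, ":", fixed = TRUE)
  sapply(parts, function(p) as.numeric(p[1]) * 60 + as.numeric(p[2]))
}

## day length in minutes, and how fast it changes from day to day
chicago[, daylight := to_min(sunset) - to_min(sunrise)]
chicago[, change   := c(NA, diff(daylight))]
chicago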


Sunshine in Chicago

My favorite day of the year is December 21, because that’s the day when the days finally start getting longer.

I’ve always wondered how quickly we gain and lose time as the seasons change, and so I thought I would try “scraping” the data off the web.  Here is that result:

Although it’s interesting to see where the days are getting shorter and longer, something else grabbed my attention along the way to this graph: the effect of daylight savings on our day.

In my younger days I loved that magical weekend when we “gained an hour”, because it felt easier to wake up for at least one Monday a year.  These days I feel much more anticipation for the spring ahead weekend, when we regain our fair share of sunshine.

Here’s what our days look like currently (with daylight savings):

Here is what our days would look like without daylight savings:
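
The actual scraping and plotting code is linked below; as a rough sketch of the scraping step, using the rvest package, with a hypothetical URL and the assumption that the sunrise/sunset table is the first table on the page:

library(rvest)

## Hypothetical URL -- the real page and table layout may differ
url  <- "https://www.timeanddate.com/sun/usa/chicago?month=12&year=2011"
page <- read_html(url)

## pull every HTML table on the page and keep the sunrise/sunset one
tables <- html_table(page)
sun    <- tables[[1]]   # assumption: the first table holds sunrise/sunset
head(sun)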

Source code: http://geneorama.com/code/SunriseSunsetExample/

Installing StatET

EDIT: Completely ignore the advice below. RStudio is now the way to go for an R development environment. It was a viable alternative about a year after I wrote this post, and now it’s hands down the only way to go.

About StatET and Eclipse

StatET is a powerful plug-in that allows you to use R inside the Integrated Development Environment (IDE) known as Eclipse. The features in Eclipse make it easier to write code in R, unless perhaps you’re already using something more sophisticated.

Eclipse has a reputation for having a “steep learning curve”. However, I have found it to be useful even if you barely know what you’re doing. The more you learn, the more useful it becomes.

StatET has a reputation for being difficult to install. There are a few things that are tricky for non-programmers. Hopefully this post will make those things more obvious.

StatET is written by Stephan Wahlbrink. The official website and more detailed instructions can be found here:  www.walware.de

System Requirements

I will be showing you how I installed the plug-in for Eclipse Indigo, using R 2.14.1. I’m using a Windows XP machine. The process is similar for Windows 7.

My Steps
Continue reading

How to upgrade to a new version of R

I updated to R 2.14.1 for the StatET instructions post (forthcoming).  While doing that, I noticed some upgrading instructions in R’s Frequently Asked Questions.

[Upgrade instructions quoted from R FAQ 2.8]

I gave it a try, but the results were a little annoying.  First of all, I had to be careful to copy over only my custom libraries, and not the core libraries (like “base” and “stats”).

Then, when I issued the update commands:
## The FAQ had ask=FALSE, but I wanted to see what was going on,
## so I set ask=TRUE
update.packages(checkBuilt=TRUE, ask=TRUE)

Unfortunately, the update.packages command updated nearly every custom package, and (oddly) a few core packages as well.  Also, I was expecting “update” to mean “just update missing files”. However, “update” meant “download the whole package and install from scratch”. So it didn’t save time or bandwidth.

I found it easier to run these commands to list the folders that are in the old library, but not in the new one:
OldFolders = list.files('C:/Documents and Settings/Gene/My Documents/R/win-library/2.13')
NewFolders = list.files('C:/Program Files/R/R-2.14.1/library')
OldFolders[!OldFolders %in% NewFolders]

Note that in 2.14 they seem to have gone back to storing the libraries in the “Program Folder” rather than in “My Documents”.  I think the original switch to “My Documents” was a work around to avoid needing admin privileges every time you install a new package / library.

Then I manually installed the libraries one by one using “install.packages”, e.g.:
install.packages('earth')
install.packages('zoo')
install.packages('rJava')
install.packages('tkrplot')

The manual installation is useful because:
•    Some of the libraries might not be available on CRAN
•    You might not need all your old libraries
•    Some libraries install dependencies, so you can skip the dependencies

Every so often I would rerun the OldFolders / NewFolders code to check what was still needed.
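
If everything that turns out to be missing happens to be on CRAN, that check can be collapsed into a single call (a convenience, not what I did at the time):

## using OldFolders / NewFolders from the code above
missing_pkgs <- OldFolders[!OldFolders %in% NewFolders]
install.packages(missing_pkgs)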