MapBrief™

Geography · Economics · Visualization

Mapping the Census and the Sincerest Form of Flattery

The life of an IPO-less entrepreneur is a curiosity, especially in difficult economic times. So when well-meaning folks ask “what is it like?”, I answer that while there’s great freedom in working on one’s own ideas, that’s counterbalanced by the realization that most of one’s ideas range from the merely unworkable to the laughably money-losing. But failure can be a more effective teacher than success: stay in the game long enough and one develops a bit of judgment in distinguishing wheat from chaff.

So when Steve Romalewski and his team at CUNY’s Center for Urban Research released their block-by-block race/ethnicity maps for 15 major cities, my immediate thought was “this is an extremely cool way to understand important patterns in a very bulky data set.”

My second thought?  “I need to steal this.”

    This CUNY map of racial/ethnic change, drawn from the 2000 and 2010 Censuses, inspired me...to thieve

 

(If you believe the famous Picasso quote “good artists borrow, great artists steal”, then ponder to what depths of mendacity a work-a-day web mapper willingly lowers himself.)

Working on the decennial redistricting process in Denver, we applied CUNY’s mapping technique to our own data and found a fascinating mix of gentrification, edge growth, and ethnic-group displacement that will have a substantial impact on the city’s politics in the next decade. So I thought making a web map covering Colorado’s Front Range (roughly Pueblo to Fort Collins) would be both intellectually interesting and a chance to work with some newer tools.

Map: Race/Ethnicity Along Colorado’s Front Range: Block-by-Block, 2000-2010

Like the CUNY map, we’re using the Bing basemap for its thorough neighborhood-scale labeling as well as its easy REST tile access. Using a custom slim build of the latest OpenLayers development release, we get the new touch support. Add a bit of CSS magic from Tobin Bradley’s GeoPortal project, and we get a decent-looking map that works on touch phones and tablets with little extra work. The tiles themselves were created in the TileMill cartographic studio: a true pleasure to use and a tool that will merit its own post next week.
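For the curious, here’s roughly what that wiring looks like. This is a minimal sketch (written as TypeScript against the OpenLayers 2.x API of the day), not the map’s actual source: the Bing key, layer name, and tile URL are placeholders.

    // Sketch only: assumes the OpenLayers 2.x script is already loaded on the page.
    declare const OpenLayers: any;

    const map = new OpenLayers.Map("map", {
      // TouchNavigation is what gives us pan and pinch-zoom on phones and tablets.
      controls: [
        new OpenLayers.Control.TouchNavigation(),
        new OpenLayers.Control.ZoomPanel(),
      ],
    });

    // Bing basemap: thorough neighborhood-scale labeling via simple REST tile access.
    const bing = new OpenLayers.Layer.Bing({
      name: "Road",
      type: "Road",
      key: "YOUR-BING-MAPS-KEY", // placeholder
    });

    // Block-by-block race/ethnicity tiles rendered in TileMill, served as plain XYZ tiles.
    const blocks2010 = new OpenLayers.Layer.XYZ(
      "Race/Ethnicity 2010",
      "http://tiles.example.com/frontrange2010/${z}/${x}/${y}.png", // placeholder URL
      { isBaseLayer: false, opacity: 0.8 }
    );

    map.addLayers([bing, blocks2010]);
    map.setCenter(
      new OpenLayers.LonLat(-104.99, 39.74) // Denver
        .transform(new OpenLayers.Projection("EPSG:4326"), map.getProjectionObject()),
      10
    );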

(I’d be remiss if I didn’t mention that FOSS4G will have OpenLayers guys, Tobin Bradley of GeoPortal, and the Development Seed crew who created TileMill.)

iPad goodness with little extra effort

    New OpenLayers touch features + GeoPortal CSS give us iPad support the way we like it: out of the box

 

From a developer’s perspective, the map doesn’t “do much”: a couple of tile layers, a slider, and zoom-to buttons. No round-trips to a database, no drill-down information. While unpaid side work definitely puts one in the mood to simplify, the simplicity is also out of respect for the intended audience of non-technical users in Colorado. Chances are they surf the way you and I surf: with a bit of ADD and very low tolerance for anything that confuses or requires reading the Help. So the goal here was to create a pleasant 40-second experience: play with the slider, click a few buttons, and then on to the next thing. The 1% of the audience that is really into it can get in touch for more.
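To make “doesn’t do much” concrete: the slider is essentially a crossfade between the 2000 and 2010 tile layers, and each zoom-to button is a hard-coded setCenter() call. Again a hedged sketch, reusing the hypothetical layers from the snippet above plus an assumed blocks2000 overlay and a couple of assumed element ids:

    // Sketch only: assumes blocks2000/blocks2010 are OpenLayers.Layer.XYZ overlays like
    // the one above, and that the page contains:
    //   <input id="yearSlider" type="range" min="0" max="100" value="100">
    //   <button id="zoomDenver">Denver</button>
    declare const OpenLayers: any;
    declare const map: any;
    declare const blocks2000: any;
    declare const blocks2010: any;

    // Crossfade 2000 <-> 2010: the slider value is simply the 2010 layer's opacity.
    const slider = document.getElementById("yearSlider") as HTMLInputElement;
    slider.addEventListener("change", () => {
      const t = Number(slider.value) / 100;
      blocks2010.setOpacity(t);
      blocks2000.setOpacity(1 - t);
    });

    // Zoom-to button: one hard-coded center and zoom level per Front Range city.
    document.getElementById("zoomDenver")!.addEventListener("click", () => {
      map.setCenter(
        new OpenLayers.LonLat(-104.99, 39.74).transform(
          new OpenLayers.Projection("EPSG:4326"),
          map.getProjectionObject()
        ),
        11
      );
    });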

It’s an exciting time for web mapping and cartographic design: roll up your sleeves and start stealing.

 

—Brian Timoney


 

Big Data is More Than More Data

As a buzz-worthy term, “Big Data” has a lot going for it: it’s easy to remember, vague enough that a shared, clear meaning is always in doubt, and it even has its own O’Reilly conference. But like programmers describing the merits of their software only in terms of the number of lines of code, talking about big-ness merely in terms of the number of records in a database seems to miss some larger point.

Though we in the geo world have experience with bulky data (e.g. LIDAR, maybe SCADA…), what’s looming with the sensor web, ubiquitous GPS, etc. is on a different scale altogether. One would hope that, given our advantage of being schooled in the analysis and display of location, our industry would have more than a leg up on our uninitiated brethren. But then haven’t we already seen the story of companies who succeeded by understanding scale first and figuring out the geo part later?

Paradoxically enough, success with Big Data may be as much a question of Art as of Science. The phrase “the data tells the story” (which was never true) is even more misleading in the context of Big Data because of its size and speed. A common analogy is sticking one’s face in front of an open fire hydrant: expect the data to tell its own story and you’ll emerge a bit dazed, able to conclude only that you experienced some sort of odorless liquid.


                      Big Data is unwieldy and difficult to bend to one's will

 

Context and narrative are key no matter what data you’re dealing with, but without them Big Data in particular has little value. To put it another way, the value of Big Data is directly correlated with an organization’s ability to mine it for meaningful, actionable information. The decidedly mixed record of the enterprise in doing anything interesting with its data besides storage, retrieval, and elementary reporting fueled the great take that “there’s no such thing as big data.” That’s why Michael Driscoll sees “Big Analytics” as a necessary complement to Big Data. This is where geo can shine: there is no more immediate context than location; throw in spatial analysis and now you’re cooking with Crisco®.

~  ~  ~  ~  ~  ~  ~  ~  ~

So Big Data requires more than adding a couple of sub-select statements to your trusty SQL queries. Parallel processing strategies, NoSQL databases, and much faster methods of moving data from server to browser (Node.js) are some of the weaponry needed to tame the beast. Earlier I mentioned Big Data’s own O’Reilly conference: Strata, coming to New York in September. Who doesn’t feel smarter and better-looking at an O’Reilly conference, despite (or because of) its $3,245 price tag? But I have a better deal: how about high-fiber, roll-up-your-sleeves, geospatial-centric sessions on Big Data scalability, turbo-charged spatial analysis, and moving all these bits and bytes around in the web world for one-third the cost? Now, truth in advertising, we’re a homely bunch, but that will improve after a few craft brews at our Mile High altitude. If money is no object, how player would it be to hit up FOSS4G in Denver, then be primed to drop the knowledge amongst the black-clad, rimless-specs set at Strata New York two weeks later?

Answer: very player.

 

—Brian Timoney

 

* photo courtesy of the hitthatswitch Flickr stream

 

Geospatial Open Source Has Gone Mainstream

With the FOSS4G conference in Denver five weeks away, one takeaway is already clear:  geospatial open source is drawing serious interest from a wider variety of sectors than we have seen in the past. My hunch is that in an ever-more competitive landscape with straitened budgets, there are three elements in particular that exert a broad appeal: technology, talent, and ideas.

Have a gander at the sponsor page: the presence of heavyweights ESRI, Google, and Oracle could be construed any number of ways. From scouting emerging technology that may pose a competitive threat to recruiting developer and managerial talent who know the geospatial realm, there is plenty of unique value here that may not be on display at their own large events.

But check out Newmont Mining: the Fortune 500 company is not only a conference sponsor but is also conducting a tutorial on rolling out a cloud-based geo-collaboration system, based on its own experience of using open source to coordinate far-flung international operations. Now that your eyebrow is raised, consider the presence of the NGA (National Geospatial-Intelligence Agency), which will be laying out its plan for a significantly more prominent role for geospatial open source in its operations.

Winds of change indeed.

It won't just be the usual suspects at the FOSS4G conference in Denver this September

 

Scanning the RSVP list to date yields more of the same: the top federal IT contractors, a couple of large environmental firms, the premier statistical-analysis software outfit (and a hot analysis/visualization start-up), a half-dozen of the biggest federal agencies, and a well-known insurance company.

What accounts for this recent, broad acceptance?  While others have traced the evolution of proprietary vs. open source in the geo market, I suspect the thorniest (and potentially most profitable) challenges today are the ones that don’t lend themselves to “Next » Next » Finish” answers.  Shrink-wrapped software is very good at solving shrink-wrapped problems, and leaving “solutions” at the mercy of a vendor’s software update schedule is a competitive disadvantage.

Check out the program: there is content geared towards all levels of technical ability, including a day-long primer for newcomers that will directly address the business side of geospatial open source.

We’ll see you in Denver.

 

—Brian Timoney

 

* photo courtesy of the BostonBill Flickr stream