High Earth Orbit
02 Mar 2012
Arlington, VA

FourSquare and OpenStreetMap

Published in OpenStreetMap, Technology


Earlier this week FourSquare announced that they switched their website maps from Google Maps to OpenStreetMap data hosted by MapBox. In what has been a growing trend of broader adoption, FourSquare’s switch underscores the utility and success of OpenStreetMap. It is also another milestone in the recent switch2osm campaign, which gained momentum after Google began requiring paid licensing for high-volume usage of the once completely free Google Maps.

Currently the switch applies only to the website, which I admit I have used less than a dozen times; the mobile application will still use the native Google Maps libraries. There are a number of valid reasons for this, not least of which is that Google is not yet charging for mobile maps usage, though I imagine it is only a matter of time before they do, and developers still need to build comparable mobile mapping libraries for OpenStreetMap.

Value of the Basemap

There are several intriguing aspects of this announcement, as well as the reaction to it. First, the change of basemap, while intriguing to the geospatial and data communities, is likely irrelevant to most FourSquare users. Would there have been much news had the switch been to Microsoft Bing maps? Probably not. The interest is clearly driven by the community, and general goodwill, of the OpenStreetMap project. Each adoption by a major company further validates its value and solidifies its continuity as organizations build their own businesses with OpenStreetMap as a core component.

Second, a number of companies have made it their primary, or recent, goal to be a trusted provider of OpenStreetMap basemaps. CloudMade, started by OpenStreetMap founder Steve Coast, was created for exactly this purpose. Additionally, MapQuest has been using OpenStreetMap as a tactic to increase adoption of its long-standing mapping platform, as well as to insure itself against likely increases in commercial data provider costs. However, it was a very recent product, albeit from a longer-established company, that ended up providing the OpenStreetMap basemap for FourSquare.

Development Seed’s MapBox is truly a compelling piece of technology and innovation. They have done extremely well adopting best-of-breed software, Mapnik, along with the development team that built it, and combining it with new technology, Node.js, to make it fast and to create a differentiating and compelling story for developers. Technical details aside, the design and thought put into the representation of OpenStreetMap was clearly a key differentiator in FourSquare choosing MapBox to serve its OpenStreetMap tiles. I’ll also add that Development Seed is a local DC and East Coast company – something I don’t doubt was interesting to the New York-based FourSquare in pushing against the typical Silicon Valley technology scene.

Of course, in the end it is just a basemap. This is the background canvas beneath the genuinely valuable information that FourSquare has gathered and that users engage with. The switch from Google Maps to OpenStreetMap does not in any way change the value or usage of the FourSquare application and community. Technically there is little real difference – it’s possible to restyle almost any basemap today, and I imagine the switch from one provider to another was a relatively trivial code change. FourSquare, or others, could just as easily switch to a new basemap if it were important to them as a business or to their community.

More than a Basemap

What I am most excited about, and believe FourSquare has an almost unique potential to enable, is the adoption of OpenStreetMap as more than just the canvas for visualizing check-ins and user activity. OpenStreetMap’s true value is that it is an open, editable, relational database of geographic data – the basemap is merely one way to access the information. What makes OpenStreetMap the future of location data is that the information can only get better, stay more up to date, and better represent the unique and varied views of a person’s place.
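As a sketch of that data model (illustrative Python, not the actual OSM API or schema): every feature is a node or way carrying free-form key=value tags, and the rendered basemap is just one view over that queryable data.

```python
# Illustrative sketch of the OpenStreetMap data model: nodes (points) and
# ways (ordered node references) each carry free-form key=value tags.
# The classes and sample data here are for demonstration only.
from dataclasses import dataclass, field

@dataclass
class Node:
    id: int
    lat: float
    lon: float
    tags: dict = field(default_factory=dict)

@dataclass
class Way:
    id: int
    node_ids: list  # ordered node references define the geometry
    tags: dict = field(default_factory=dict)

# A coffee shop as a tagged node, and a street as a way (made-up values).
cafe = Node(1, 38.8895, -77.0353, {"amenity": "cafe", "name": "Corner Coffee"})
street = Way(10, [2, 3, 4], {"highway": "residential", "name": "Main Street"})

# The tags, not the rendered tiles, are the real data.
print(cafe.tags["amenity"])    # cafe
print(street.tags["highway"])  # residential
```

The point of the sketch is that the same tagged records drive rendering, search, and routing alike – the basemap is only one consumer.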

Several years ago, just after the initial launch of FourSquare, Dennis and I had a conversation about the potential of using OpenStreetMap. At the beginning FourSquare only worked in specific cities, and in considering how to expand it everywhere, the options were between starting with a blank database and starting with an OpenStreetMap-populated dataset. The tremendous potential, obviously, was having the then-nascent community of FourSquare users using and updating OpenStreetMap data. Unfortunately, for usability and, I assume, business reasons (e.g. build your own database that you can own), FourSquare didn’t adopt OpenStreetMap at that time.

However, imagine if FourSquare adopted just this technique and leveraged its millions of users to improve the OpenStreetMap database. OpenStreetMap itself suffers from the common platform problem of trying to be everything to everyone, which makes it confusing for new contributors to know where to begin. They may just want to add the road in front of their house – or the park down the street and the great coffee shop they frequent. Unfortunately, the interface for doing so often requires an understanding of British terminology for places and presents an overwhelming choice of categories, tags, and drawing options.

FourSquare, by contrast, is forced to be simple and focused. Users engage and disengage with the application quickly, so it should capture the data and reflect it back to the user for verification. Because my activity is being tracked, FourSquare can know that I’m on foot, in the US, and in an urban area – so don’t start by showing me hiking trails or highways, but show me restaurants and relevant places of interest, allowing me to dive deeper if I want to while making it simple for the casual user to improve the data. I believe that only through simple and focused user applications will OpenStreetMap broadly enter into common use and reach the long tail of location data.
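A hedged sketch of what that context-aware filtering could look like – the category names and context keys below are my own illustrations, not FourSquare’s or OSM’s actual taxonomy:

```python
# Hypothetical sketch: rank feature categories by a user's context so a
# check-in app only offers relevant choices. All names here are invented
# for illustration; this is not any real FourSquare or OSM API.
CATEGORY_CONTEXT = {
    "restaurant":   {"urban", "on_foot"},
    "cafe":         {"urban", "on_foot"},
    "highway":      {"driving"},
    "hiking_trail": {"rural", "on_foot"},
}

def relevant_categories(context):
    """Return categories whose required context is fully satisfied."""
    ctx = set(context)
    return sorted(c for c, needs in CATEGORY_CONTEXT.items() if needs <= ctx)

# A pedestrian in a US city sees places to eat, not trails or motorways.
print(relevant_categories({"urban", "on_foot"}))  # ['cafe', 'restaurant']
```

The design choice is the same one argued above: hide the full OSM tag universe behind a context filter, and let power users opt into the deeper options.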

Of course, this assumes that FourSquare – specifically its investors and board – doesn’t see its user-collected place data as a key, protected dataset. There have been enough POI-selling companies in a dying market. There are now businesses such as Factual, and still CloudMade, that are focused on making this data openly available – though they position themselves as brokers of the data.

Despite crossing numerous impressive adoption hurdles over more than seven years of development, OpenStreetMap is still a young project. Its adoption by FourSquare is indeed another momentous occasion that heralds optimism that it will continue to grow. And as companies like Development Seed, CloudMade, MapQuest, and others adopt OpenStreetMap as a core of their business – providing not just services but truly engaging with the community and providing focused context and value – OpenStreetMap will only get better.


04 Feb 2011
San Jose, CA

Automatic Road Detection – the Good and the Bad

Published in OpenStreetMap



Yesterday Steve Coast announced that Bing had released a new tool for doing automatic road detection using satellite imagery. The concept is definitely interesting, as it provides a way to rapidly generate road data over the entire globe without the need for manual tracing.

However, I remarked that it was particularly interesting that Steve was working on this. Several years ago, when OpenStreetMap was still an ambitious but unproven concept, many people argued that road detection was a useful, and perhaps necessary, mechanism for actually capturing all the road data. Steve was quite adamant that while it was possible – and he demonstrated it – it wouldn’t work, for other reasons.

OpenStreetMap is more than just a set of lines that render to nice maps. It is a topologically connected, classified and attributed, labeled network of geographic entities. Each road consists of intersections, road classifications, names, speed limits, overpasses, and lanes. OpenStreetMap has provided a very rich set of linked, geographic data.
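The difference is easy to see in code. Below is a minimal sketch (with invented data, not real OSM output) of roads as a topological graph, where intersections fall out of shared nodes and each edge carries attributes – exactly the structure that raw pixel-based road detection does not produce:

```python
# Minimal sketch of why OSM roads are more than lines: a topological graph
# where intersections are shared nodes and each edge carries attributes.
# Node IDs, names, and attributes below are illustrative only.
from collections import defaultdict

# Each segment: (from_node, to_node, attributes)
ways = [
    (1, 2, {"highway": "primary", "name": "Elm St", "maxspeed": 40}),
    (2, 3, {"highway": "primary", "name": "Elm St", "maxspeed": 40}),
    (2, 4, {"highway": "residential", "name": "Oak Ave"}),
]

graph = defaultdict(list)
for a, b, attrs in ways:
    graph[a].append((b, attrs))
    graph[b].append((a, attrs))  # treat roads as traversable both ways here

# Node 2 is an intersection, detected purely from shared topology --
# information that tracing pixels from imagery alone cannot supply.
intersections = [n for n, edges in graph.items() if len(edges) >= 3]
print(intersections)  # [2]
```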

And beyond the data, it has built a community of invested members who carefully capture, annotate, and cultivate the data in OpenStreetMap. This means the data is not just captured but also (ideally) updated and maintained with new information, changes, and other entities such as parks, buildings, bus stops, and more.

So Steve convincingly pointed out that while automatic road identification was interesting, it would circumvent these other benefits of what OpenStreetMap was building: rich connected data, and a community of volunteers to build and maintain the dataset. Road detection tends to generate a large amount of data in areas where no one is actively working on the data, so you gain what appears to be good coverage but with limited local knowledge of intersections, names, and other metadata.

I don’t think these are insurmountable problems. Capturing GPS data can be tedious, inaccurate, or not readily possible in remote areas. Road detection can provide this data, and users can work afterwards to improve it, either remotely or using even simpler mobile devices with which a user can annotate features without having to capture the entire geographic road line.

So my comment the other day was about pointing out an interesting change in message and strategy. I applaud the work of Steve and the Bing team in developing new tools, but there are many other pieces that warrant consideration. Steve himself often asked whether the bulk import of the TIGER/Line data was good or bad for the US community. In the end, I believe it was the right thing, as it provided a canvas of open data that demonstrated to skeptics that OpenStreetMap was viable and valuable.

Now that OpenStreetMap is increasingly adopted by the world’s largest providers and users of data, it is time to evaluate new tactics for gathering and maintaining data. However, this can’t come at the expense of what has made OpenStreetMap a success for the past 5+ years.


08 Aug 2010

State of the Map US

Published in Conference, OpenStreetMap


Unfortunately I missed State of the Map in Girona, Spain this year. I seem to be making every other one – which means I’ll be attending the first State of the Map US, being held in Atlanta this coming weekend.

The United States had a much later start in OpenStreetMap than Europe and other parts of the world – but we also have a long history of open government data that created less demand or need for grassroots mapping. The benefit of this culture, however, is that US governments, from the local and state levels all the way to the federal level, are interested in utilizing OpenStreetMap and connecting with the community.

I’ll be speaking on Sunday about the necessity, and benefits, of moving beyond merely open data to focus instead on collaborative data gathering and mapping. I’ll draw on our work on GeoCommons, OpenStreetMap, and data-sharing deployments to Afghanistan, Pakistan, and Haiti to show how citizens and organizations need to engage together in discussing the need for data, methods for collectively gathering it, and ways to openly share it and capture feedback, in order to improve the overall quality as well as the impact of open data.

OpenStreetMap has understood this from the beginning, promoting itself through “mapping parties”. These parties had the explicit goal of mapping a region and training new mappers, but implicitly they created a community of like-minded local citizens who self-identified their desire to spend time and energy working together to gather and open up data. It is initiatives like this that are vital at the local and regional levels.

If you’re near Atlanta, or can come by to the conference, hope to see you there. And regardless, think about how you can connect within your community of interest to start a dialogue and collaboration around open data.


15 Jan 2010

Haiti Mapping

Published in Data, GeoCommons, OpenStreetMap


The last two days have been filled with coordinating various efforts to gather information and volunteers responding to the massive Haiti earthquake of January 12. The analysis team at FortiusOne has put together a news dashboard highlighting the event and current response efforts.

There have been several tremendous groups actively contributing data and tools, both through remote developers and with responders on the ground: CrisisMappers, CrisisCommons, Ushahidi, and OpenStreetMap, just to name a few.

Many data providers have been making their data freely available. This is most notable when looking at Mikel’s screenshots of OpenStreetMap before the quake and after volunteers began tracing over historic maps and newer satellite imagery from DigitalGlobe and GeoEye.

Other efforts:

  • Ushahidi Haiti is crowd-sourcing reports. You can send a text message to 447624802524, send an email to haiti@ushahidi.com, or send a tweet with the hashtag #haiti or #haitiquake.
  • The CrisisCommons Wiki has a list of available data and organizations
  • Sahana has a form to list offices and organizations that are working on the ground
  • GeoCommons search for Haiti has all the datasets and maps that people have contributed for download as Spreadsheet, Shapefile, KML, and more
  • OpenStreetMap’s Project Haiti has a list of datasets and people tracing data

05 Jan 2010
Google, Mountain View, CA

excited about in 2010

Published in Geo, Mobile, OpenStreetMap


As always, each new year brings a refreshed feeling of excitement. Perhaps it’s the long holidays and copious amounts of food, family, and fun; or seeing a magic new number on the calendar that makes it feel like “The Future!”; or just a desire to take advantage of a sanctioned re-emergence of reflection and goal setting. Of course, time isn’t discontinuous, so 2010 isn’t disconnected from the current continuum of development and trends – but it’s still worthwhile to take the time to step back and consider where we are and where we’re going.

Mashable and James, amongst many others, have excellent predictions of what will and won’t happen in 2010. Generally they offer good insight into trends in the geo and mobile space, although I will take up a counterpoint to some of James’s suppositions on File Formats, Interfaces, OpenStreetMap, and Augmented Reality.

File Formats and Interfaces

Geo is definitely becoming mainstream – everyone in my family has a PND, uses Google Maps, and is asking about various location-sharing applications. In the next year we’ll see geo become part of the assumed infrastructure: like the timestamp on a post or article, the location will simply be embedded.

I don’t think TAG (Twitter, Apple, Google), as James puts it, will be the only location-sharing services. They, along with the even more widely used Facebook, will definitely be the general public’s interface to location query and sharing – but for that very reason they will have to be very generic, leaving room for specialized location-based services to thrive in niches. FourSquare offers gaming, Flickr offers visual media, and others cover music, drinking, sightseeing, and house hunting. They will leverage TAG, or at least TG.

Apple is like the Nintendo of consumer technology – more interested in providing an integrated, compelling experience, and privacy, than in full openness and engagement with developers or geeks. They’ll still have APIs, but not something like OpenSocial, GeoRSS, or FireEagle integration.

The iPhone, and to a lesser extent Android, have been revolutionizing mobile devices. They are truly providing windows into the web of data combined with the real world. It’s natural for geospatial tools to move into these interfaces, but like any good user experience, they won’t offer the same capabilities you find in a desktop or browser application. The utilities will be specialized for small screens, finger input, and out-and-about tasks.

As for file formats, the Shapefile, unfortunately, isn’t near end-of-life. Too many tools speak only Shapefile, and a great deal of legacy data is still available only as Shapefiles. Sites like GeoCommons offer alternate formats for all their data, but that still won’t retire this basic format. Only when there is a truly open, license-free API to File Geodatabases (FGDB), and every off-the-shelf tool can speak that API or SpatiaLite, will Shapefiles begin disappearing.
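Part of the Shapefile’s staying power is how simple and well-documented the binary format is – the 100-byte header can be parsed with nothing but the standard library. A minimal sketch against the published ESRI spec (the header bytes below are synthesized for demonstration):

```python
# Sketch: parse key fields of a Shapefile (.shp) 100-byte header per the
# ESRI Shapefile Technical Description. The header built at the bottom is
# synthetic, constructed purely to exercise the parser.
import struct

def read_shp_header(data: bytes):
    """Return version and shape type from a .shp header, validating the magic."""
    file_code = struct.unpack(">i", data[0:4])[0]     # big-endian magic, always 9994
    if file_code != 9994:
        raise ValueError("not a shapefile")
    version = struct.unpack("<i", data[28:32])[0]     # little-endian, always 1000
    shape_type = struct.unpack("<i", data[32:36])[0]  # 1=Point, 3=Polyline, 5=Polygon
    return {"version": version, "shape_type": shape_type}

# Synthesize a header for a Point shapefile and parse it back.
header = struct.pack(">i", 9994) + b"\x00" * 24 + struct.pack("<ii", 1000, 1)
header += b"\x00" * (100 - len(header))
print(read_shp_header(header))  # {'version': 1000, 'shape_type': 1}
```

That simplicity – a fixed header plus flat geometry records – is exactly why every off-the-shelf tool can read it, and why it will linger.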

GeoRSS and/or KML, on the other hand, will be in every service that does anything geo. Looking at reviews of any iPhone app that includes KML (or doesn’t) bears this point out. Nearly everyone has Google Earth on their desktop, and Google is making big pushes in the utilization of the Google Earth Plugin for in-browser virtual globes.
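To illustrate how lightweight GeoRSS is, here is a minimal GeoRSS-Simple point embedded in an Atom entry, built with only the standard library (the check-in title and coordinates are made up):

```python
# Sketch: embed location in a feed entry via GeoRSS-Simple -- a single
# georss:point element whose text is "lat lon". Title and coordinates
# are invented for illustration.
import xml.etree.ElementTree as ET

GEORSS = "http://www.georss.org/georss"
ATOM = "http://www.w3.org/2005/Atom"

entry = ET.Element(f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "Check-in: Union Station"
ET.SubElement(entry, f"{{{GEORSS}}}point").text = "38.8977 -77.0063"

xml = ET.tostring(entry, encoding="unicode")
point = entry.find(f"{{{GEORSS}}}point").text
print(point)  # 38.8977 -77.0063
```

One extra element per entry is the entire cost of geo-enabling a feed, which is why it is reasonable to expect it everywhere.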

Visualization Technologies

To date, we’ve been stuck with either Flash or JavaScript DOM magic (and yes, Silverlight is out there too) for data and geospatial visualization in the browser. As I mentioned, Google has been pushing the Google Earth Plugin, but more generally it has also released O3D, a modern incarnation of X3D, providing broader capabilities for creating 3D browser experiences. VRML lives!

More recently, there has been a resurgence in vector graphics that don’t rely on proprietary technologies or additional plugins. SVG and Canvas are pretty widely supported, except in the infamous Internet Explorer (which I hear is still being used even today). Examples such as Protovis, Cartagen, and Tom Carden’s experiments demonstrate that SVG is on the cusp of being able to deliver the majority of compelling visualization capabilities.
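As a small illustration of how little machinery plugin-free vector graphics require, this sketch emits a scatter of points as inline SVG using only the standard library (the point values and styling are arbitrary):

```python
# Sketch: a plugin-free visualization as inline SVG -- the kind of output
# any server or script can hand straight to a browser. Data is made up.
import xml.etree.ElementTree as ET

def scatter_svg(points, size=100):
    """Render (x, y) points as circles in a size x size SVG document."""
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                     width=str(size), height=str(size))
    for x, y in points:
        ET.SubElement(svg, "circle", cx=str(x), cy=str(y), r="3",
                      fill="steelblue")
    return ET.tostring(svg, encoding="unicode")

markup = scatter_svg([(10, 20), (40, 60), (80, 30)])
print(markup.count("<circle"))  # 3
```

No Flash, no plugin, no runtime – just markup the browser already understands, which is the whole appeal.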

Another driver for alternative visualization platforms is the push toward mobile devices. I don’t see Apple allowing Adobe onto the iPhone anytime soon, and even Android doesn’t have Flash support yet. What types of visualization make sense on mobile is still a very open question – but whatever they are, they will be built with something like SVG.

Geo Data Skirmishes

James suggests that OpenStreetMap “won’t dominate”. While it won’t dominate, I disagree that it won’t continue to be extremely successful.

Google has recently moved to gathering its own data. It still has a long way to go, with many, many errors in roads, areas, addresses, and businesses, and it is using the crowd to help clean them up. Google is in fact proving the crowd-sourced model; it will be successful. But Google is doing it with Google’s data, so there is no positive external benefit to that work – to the industry it just looks like another data provider. With the model proven, however, OpenStreetMap will succeed, since any effort put into OSM benefits everyone else.

However, there is a major difference in the trajectory OpenStreetMap is taking in the United States compared with Europe and other regions. In most other countries, governments had very draconian licensing, and as such OpenStreetMap was creating data from blank areas – starting from scratch and building a community of volunteers along the way.

By contrast, in the US the vast majority of the data is free, and it is becoming more available every day under the new administration. The US therefore has broad coverage of decent data without having first built the user community. So the difficulty here is both in building out the community and in engaging companies that could do the same thing on their own while retaining proprietary rights to the data.

What’s fascinating, and what signals the ultimate long-term success of OpenStreetMap, is that US state, local, and federal government agencies are themselves engaging with OpenStreetMap. They are investigating how to put their data directly into OSM, and possibly even how to re-incorporate updates and modifications back into their own infrastructures. Some are even considering using the OSM toolset as their infrastructure. OpenStreetMap is going through some growing pains with respect to licensing, maintenance, and community – but these are all necessary steps in moving from a small cadre of hackers to a global, public project.

As we see an increase in open government, specifically driven by the US Administration’s directives as well as other initiatives such as INSPIRE, this embrace and utilization of open platforms and repositories for sharing, federating, and synchronizing data will only increase.

And as for augmented reality, it won’t be as big as you think… yet.

