Planet Geospatial

Boundless: Ann’s Perspective on FOSS4G 2014

FOSS4G 2014

As Paul Ramsey mentioned, last week almost 900 members of the open source geospatial software community came together in Portland at FOSS4G 2014. We were proud to sponsor, privileged to participate in over nine presentations and nine workshops, excited about our new QGIS offerings, and pleased to see even greater interest in our PostGIS, GeoServer, and OpenLayers offerings during the conference.

The power of Spatial IT resonated throughout the conference as participants were able to highlight their projects and unique use cases of open source geospatial tools to solve a wide variety of technical and business problems.

Highlights from our sessions

Paul conducted a very useful session on how to convince managers to embrace PostGIS and replace proprietary database offerings. The blend of technical and business elements in Paul’s talk spoke to the need not only to use the best available software, but also to keep educating organizations about the value derived from using open source software.

Jody Garnett reviewed the new and noteworthy features introduced in GeoServer over the past year. Since GeoServer is part of the core of OpenGeo Suite, it’s always promising to see support for new standards like WCS 2.0 and new formats like GeoPackage and NetCDF become part of the software.

Andreas Hocevar described what’s new and how to get started with OpenLayers 3. His talk provided an overview from a user’s perspective, covering common use cases and new features to help developers get comfortable integrating spatial information into web applications.

The LocationTech events highlighted the ability of the community to truly embrace cooperation in the interest of advancing common projects and common goals.

FOSS4G is about community

A true sense of community, however, was the best part of the conference. There was a great feeling of camaraderie throughout the weeklong event. All of the presenters and booth participants, regardless of affiliation, were joined together by the common cause of promoting the value of and expanding the use of open source tools to reduce the cost of legacy GIS implementations and escape the monolithic, proprietary software options that dominate the industry.

Nothing drove the open source message home further for me than the train ride I took from the magnificent World Forestry Center back to my hotel after the gala on Thursday evening. Seated quietly, my badge now tucked in my purse, I became captivated by a conversation among three gentlemen just a few rows away. The three were discussing their week at FOSS4G, which seemed a very positive experience for all. And then one of the men observed, “Open source is really becoming a standard in GIS. Even Esri was in attendance at FOSS4G.”

Well, I suppose that is just the point for the community — if you can’t beat ‘em, join ‘em! Whether Esri is genuine in its open source support remains to be seen, but what is evident is that it can no longer afford to ignore open source. Open source is part of the geospatial software ecosystem and will continue to grow, providing more affordable opportunities for people to expand the technology into critical business and IT applications throughout their organizations.

I’m already planning for FOSS4G 2015 in Seoul!

The post Ann’s Perspective on FOSS4G 2014 appeared first on Boundless.

The Map Guy(de): MapGuide tidbits: Running 32-bit MapGuide on 64-bit Linux

We still don't have our mythical 64-bit build of MapGuide on Linux yet. So in the meantime, should you want to run the 32-bit CentOS or Ubuntu builds of MapGuide on the respective 64-bit versions of those distributions, here are the packages you will need to have installed beforehand.

On 64-bit Ubuntu: Just install the ia32-libs package

On 64-bit CentOS: You will need to install the following packages:
  • glibc.i686 
  • libstdc++.i686 
  • expat.i686 
  • libcurl.i686 
  • pcre.i686 
  • libxslt.i686 
  • libpng.i686
This will satisfy the dependencies required by your 32-bit MapGuide, its bundled Apache HTTPD Server and PHP.

The bundled Tomcat and Java wrapper API had not been tested under this environment at the time this post was published, so, taking a logical shot in the dark, you probably just need to install the respective 32-bit JVM package for the Tomcat and Java wrapper API to work. If it turns out I'm wrong about that, please do correct me in the comments below.

LiDAR News: Automated 3D Feature Extraction

The research undertaken has explored the energy function to solve various tasks, such as LiDAR data filtering. Continue reading →

Click Title to Continue Reading...

Directions Magazine: USGS: We Will Rock You - Geologic Map Day

Directions Magazine: Smarter working on the road with new TomTom PRO 8 series driver terminals

Directions Magazine: Arithmetica Launches SphereVision 360 Degree Video Mapping Software

Directions Magazine: Visit 1Spatial At the ‘Geo: The Big Five’ Big Data Event

Directions Magazine: GIS Market in the APAC Region 2014-2018: Key Vendors are Autonavi Holdings, HERE, Hexagon and Navinfo

Directions Magazine: GEO-Energy Summit to Focus on National Security and Energy Independence

Directions Magazine: Every GIS Student’s Nightmare: Finding a Research Topic

Paul Ramsey: PostGIS Feature Frenzy

A specially extended feature frenzy for FOSS4G 2014 in Portland. Usually I only frenzy for 25 minutes at a time, but they gave me an hour-long session!

PostGIS Feature Frenzy — Paul Ramsey from FOSS4G on Vimeo.

Thanks to the organizers for giving me the big room and big slot!

GIS in XML: WebGL with a little help from Babylon.js

Most modern browsers now support the HTML5 WebGL standard: Internet Explorer 11+, Firefox 4+, Google Chrome 9+, and Opera 12+. One of the latest to the party is IE 11.


Fig 2 – html5 test site showing WebGL support for IE11

WebGL support means that GPU power is available to javascript developers in supporting browsers. GPU technology fuels the $46.5 billion “vicarious life” industry. Video gaming surpasses even Hollywood movie tickets in annual revenue, but this projection shows a falling revenue curve by 2019. Hard to say why the decline, but is it possibly an economic side effect of too much vicarious living? The relative merits of passive versus active forms of “vicarious living” are debatable, but as long as technology chases these vast sums of money, GPU geometry pipeline performance will continue to improve year over year.

WebGL exposes immediate mode graphics pipelines for fast 3D transforms, lighting, shading, animations, and other amazing stuff. GPU induced endorphin bursts do have their social consequences. Apparently, Huxley’s futuristic vision has won out over Orwell’s, at least in internet culture.

“In short, Orwell feared that what we fear will ruin us. Huxley feared that our desire will ruin us.”

Neil Postman, Amusing Ourselves to Death

Aside from the Soma-like addictive qualities of game playing, game creation is actually a lot of work. Setting up WebGL scenes with objects, textures, shaders, transforms … is not a trivial task, which is where Dave Catuhe’s Babylon.js framework comes in. Dave has been building 3D engines for a long time. In fact I’ve played with some of Dave’s earlier efforts in ye olde Silverlight days of yore.

“I am a real fan of 3D development. Since I was 16, I spent all my spare time creating 3d engines with various technologies (DirectX, OpenGL, Silverlight 5, pure software, etc.). My happiness was complete when I discovered that Internet Explorer 11 has native support for WebGL. So I decided to write once again a new 3D engine but this time using WebGL and my beloved JavaScript.”

Dave Catuhe, Eternal Coding

Dave’s efforts improve with each iteration, and Babylon.js is a wonderfully powerful yet simple-to-use javascript WebGL engine. The usefulness/complexity curve is a rising trend. To be sure, a full-fledged gaming environment is still a lot of work; with Babylon.js much of the heavy lifting falls to the art design guys. From a mapping perspective I’m happy to forego the gaming, but still enjoy some impressive 3D map building with low effort.

In order to try out babylon.js I went back to an old standby, NASA Earth Observation data. NASA has kindly provided an OGC WMS server for their earth data. Brushing off some old code I made use of babylon.js to display NEO data on a rotating globe.

Babylon.js has innumerable samples and tutorials, which makes learning easy for those of us less inclined to read manuals. This playground is an easy way to experiment: Babylon playground

The Babylon.js engine is used to create a scene, which is then handed off to engine.runRenderLoop. From a mapping perspective, most of the interesting stuff happens in createScene.

Here is a very basic globe:

<!DOCTYPE html>
<html>
<head>
    <title>Babylon.js Globe</title>
    <!-- adjust the src path to wherever babylon.js lives -->
    <script src="babylon.js"></script>
    <style>
        html, body {
            overflow: hidden;
            width: 100%;
            height: 100%;
            margin: 0;
            padding: 0;
        }
        #renderCanvas {
            width: 100%;
            height: 100%;
            touch-action: none;
        }
    </style>
</head>
<body>
    <canvas id="renderCanvas"></canvas>
    <script>
        var canvas = document.getElementById("renderCanvas");
        var engine = new BABYLON.Engine(canvas, true);

        var createScene = function () {
            var scene = new BABYLON.Scene(engine);

            // Light
            var light = new BABYLON.HemisphericLight("HemiLight", new BABYLON.Vector3(-2, 0, 0), scene);

            // Camera
            var camera = new BABYLON.ArcRotateCamera("Camera", -1.57, 1.0, 200, BABYLON.Vector3.Zero(), scene);
            camera.attachControl(canvas, true);

            // Creation of a sphere
            // (name of the sphere, segments, diameter, scene)
            var sphere = BABYLON.Mesh.CreateSphere("sphere", 100.0, 100.0, scene);
            sphere.position = new BABYLON.Vector3(0, 0, 0);
            sphere.rotation.x = Math.PI;

            // Add material to sphere
            var groundMaterial = new BABYLON.StandardMaterial("mat", scene);
            groundMaterial.diffuseTexture = new BABYLON.Texture('textures/earth2.jpg', scene);
            sphere.material = groundMaterial;

            // Animations - rotate earth
            var alpha = 0;
            scene.beforeRender = function () {
                sphere.rotation.y = alpha;
                alpha -= 0.01;
            };

            return scene;
        };

        var scene = createScene();

        // Register a render loop to repeatedly render the scene
        engine.runRenderLoop(function () {
            scene.render();
        });

        // Watch for browser/canvas resize events
        window.addEventListener("resize", function () {
            engine.resize();
        });
    </script>
</body>
</html>


Fig 3 – rotating Babylon.js globe

Add one line for a 3D effect using a normal (bump) map texture.

groundMaterial.bumpTexture = new BABYLON.Texture('textures/earthnormal2.jpg', scene);


Fig 4 – rotating Babylon.js globe with normal (bump) map texture

The textures applied to BABYLON.Mesh.CreateSphere required some transforms to map correctly.


Fig 5 – texture images require img.RotateFlip(RotateFlipType.Rotate90FlipY);

Without this image transform the resulting globe is more than a bit warped. It reminds me of a Pangea timeline gone mad.


Fig 6 – globe with no texture image transform

Updating our globe texture skin requires a simple proxy that performs the img.RotateFlip after getting the requested NEO WMS image.

public Stream GetMapFlip(string wmsurl)
{
    string message = "";
    try
    {
        HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(new Uri(wmsurl));
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            if (response.StatusDescription.Equals("OK"))
            {
                using (Image img = Image.FromStream(response.GetResponseStream()))
                {
                    //rotate image 90 degrees, flip on Y axis
                    img.RotateFlip(RotateFlipType.Rotate90FlipY);
                    using (MemoryStream memoryStream = new MemoryStream())
                    {
                        img.Save(memoryStream, System.Drawing.Imaging.ImageFormat.Png);
                        WebOperationContext.Current.OutgoingResponse.ContentType = "image/png";
                        return new MemoryStream(memoryStream.ToArray());
                    }
                }
            }
            else message = response.StatusDescription;
        }
    }
    catch (Exception e)
    {
        message = e.Message;
    }
    ASCIIEncoding encoding = new ASCIIEncoding();
    Byte[] errbytes = encoding.GetBytes("Err: " + message);
    return new MemoryStream(errbytes);
}

With texture in hand the globe can be updated, setting hasAlpha to true:

var overlayMaterial = new BABYLON.StandardMaterial("mat0", nasa.scene);
var nasaImageSrc = Constants.ServiceUrlOnline + "/GetMapFlip?url=" + nasa.image + "%26BGCOLOR=0xFFFFFF%26TRANSPARENT=TRUE%26SRS=EPSG:4326%26BBOX=-180.0,-90,180,90%26width=" + nasa.width + "%26height=" + nasa.height + "%26format=image/png%26Exceptions=text/xml";
overlayMaterial.diffuseTexture = new BABYLON.Texture(nasaImageSrc, nasa.scene);
overlayMaterial.bumpTexture = new BABYLON.Texture('textures/earthnormal2.jpg', nasa.scene);
overlayMaterial.diffuseTexture.hasAlpha = true;
nasa.sphere.material = overlayMaterial;

Setting hasAlpha to true lets us show a secondary earth texture through the NEO overlay where data was not collected. For example, the GEBCO_BATHY bathymetry layer leaves transparent holes over the continental masses, making the earth texture underneath visible. Alpha sliders could also be added to stack several NEO layers, but that’s another project.


Fig 7 – alpha bathymetry texture over earth texture

Since a rotating globe can be annoying it’s worthwhile adding a toggle switch for the rotation weary. One simple method is to make use of a Babylon pick event:

window.addEventListener("click", function (evt) {
    var pickResult = nasa.scene.pick(evt.clientX, evt.clientY);
    if (pickResult.hit && != "skyBox") {
        if (nasa.rotationRate < 0.0) nasa.rotationRate = 0.0;
        else nasa.rotationRate = -0.005;
    }
});

In this case any click ray that intersects the globe will toggle globe rotation on and off. Click picking is a kind of collision checking for object intersection in the scene, which could be very handy for adding globe interaction. In addition to the picked mesh, pickResult gives a pickedPoint location, which could be reverse transformed to a latitude and longitude.

Starbox (no coffee involved) is a quick way to add a surrounding background in 3D. It’s really just a BABYLON.Mesh.CreateBox big enough to engulf the earth sphere, a very limited kind of cosmos. The stars are not astronomically accurate, just added for some mood setting.

Another handy Babylon feature is BABYLON.Mesh.CreateGroundFromHeightMap:

/* Name
 * Height map picture url
 * mesh Width
 * mesh Height
 * Number of subdivisions (increase the complexity of this mesh)
 * Minimum height : The lowest level of the mesh
 * Maximum height : the highest level of the mesh
 * scene
 * Updatable: say if this mesh can be updated dynamically in the future (Boolean)
 */

var height = BABYLON.Mesh.CreateGroundFromHeightMap("height", "textures/" + heightmap, 200, 100, 200, 0, 2, scene, false);

For example using a grayscale elevation image as a HeightMap will add exaggerated elevation values to a ground map:


Fig 8 – elevation grayscale jpeg for use in BABYLON HeightMap


Fig 9 – HeightMap applied

The HeightMap can encode any value; for example, NEO monthly fires converted to grayscale will show fire density over the surface.


Fig 10 – NEO monthly fires as heightmap

In this case a first-person shooter (FPS) camera was substituted for the generic ArcRotate camera so users can stalk around the earth looking at fire spikes.

“FreeCamera – This is a ‘first person shooter’ (FPS) type of camera where you control the camera with the mouse and the cursors keys.”

Lots of camera choices are listed here, including Oculus Rift, which promises some truly immersive map opportunities. I assume this note indicates Babylon is waiting on the retail release of Oculus to finalize a camera controller.

“The OculusCamera works closely with our Babylon.js OculusController class. More will be written about that, soon, and nearby.
Another Note: In newer versions of Babylon.js, the OculusOrientedCamera constructor is no longer available, nor is its .BuildOculusStereoCamera function. Stay tuned for more information.”

So it may be only a bit longer before “vicarious life” downhill skiing opportunities are added to FreshyMap.



Fig 11 – NEO Land Surface average night temperature

Nathan's QGIS and GIS blog: QGIS atlas on non-geometry tables

This is proof that no matter how close you are to a project you can still miss some really cool stuff that you never knew or considered was possible.

The problem to solve:

You have a CSV where each row is a set of colours. Each row should produce a new map, and each column holds the colour for one feature.

This is an example of that kind of input:

A       B
#93b2f3 #FF0000 
#dfbdbb #FF0000
#f9d230 #FF0000

This question was asked on GIS.SE this morning. When I first saw it I had no idea it was even possible; I was thinking along the same lines as the person asking, that it would have to be done with Python. Not hard, but a lot harder than something built in, so I put it in the too-hard basket. I thought the atlas could almost do that: almost, but not really.

Well, "almost" was wrong. It can.

Note: You will need QGIS 2.5 (2.6 when released) for this to work

Let's make some cool maps! (And go to GIS.SE and upvote Nyall's answer.)

First open your vector layer and the CSV. Don't worry about style just yet, we will do it later.

Create a composer and add your map.

Here comes the first part of the trick.

Enable Atlas and set the coverage layer to the CSV layer. Wait? What? That doesn't make any sense. If you think about it for a while it does. We need a map for each row (or "feature") in the CSV and atlas does just that.


How do we style the features? Well, here is the other part of the trick. In 2.6 there is a magic expression function that returns a field value from another feature. And it's as simple as attribute( $atlasfeature , 'A' ): give me the attribute of the current atlas feature for field 'A'. Simple.

First we categorize our features so we have a symbol for each feature. I'm using a sample layer I have, but you can see how this works. The first feature is A, the other is B, etc.


Now to use another awesome feature of QGIS: data-defined symbol properties (and labels too). Change each symbol and set the data-defined colour property, using attribute( $atlasfeature , 'A' ) for the first one and attribute( $atlasfeature , 'B' ) for the second.


That is it. Now jump back over to your composer and enable Atlas preview.



Bam! Magic! How awesome is that!

Now my other thought was: "OK cool, but the legend won't update." I should know by now not to assume anything. The legend will also update based on the colours from the feature.



How far can we take this? What if you need the label to match the colour? Simple: just make the label text look like this:

<h1 style='color:[% "A" %]'>This is the colour of A</h1>


Heaps of credit to Nyall and the others who have added all this great stuff to the composer, atlas, and the data defined properties. It's not something that you will do every day but it's great to see the flexibility of QGIS in these situations.

You can even make the background colour of the page match the atlas feature


but don't do that because people might think you are mad ;)

GeoIQ Blog: Catching Up With the States

NSGIC 2014 Annual Conference Logo

Last week I was in sunny rainy Charleston, SC, attending the National States Geographic Information Council’s annual meeting. NSGIC’s mission is to promote coordination of geospatial activities within and between states and to advocate for effective policy at the national level. I made many connections within the amazing community of professionals who represented 36 states at the conference. It is no surprise that these states share so many common challenges. The bedrock of their data, things like road centerlines, address points, parcels, and boundaries, are all seeing increasing demand for use. Those who use the data need it to be absolutely accurate, as there is little margin for error in domains like emergency response.

How do the data owners allow, for example, a 911 call-center operator who finds a problem with an address to log an error that can be easily acted upon right at that data point? Or, how can they allow that person or a citizen stakeholder to propose a correction or addition to the data while maintaining its authoritative nature?

The challenge of collaboration around authoritative data is one that we think about a lot on the Open Data team. Today we enable information to be shared within organizations and governments as well as with the general public. For example, the State of Wyoming uses ArcGIS Open Data to publish their highway network dataset. However, there is so much room to grow in order for Open Data to help complete the circle of publishing, feedback and improvement. I know that the whole Open Data team is excited to tackle this challenge as we continue to conduct research and develop our product. And, personally, I’m looking forward to catching up with all the states again in Annapolis.

Daniel Fenton is a Product Engineer on the Open Data Team

JGrass Tech Tips: uDig 1.4.0 on Osgeo4W + BeeGIS and the Nettools (and java 7) + 32/64bit

I have been made aware that Java is outdated on Osgeo4W and that I am the maintainer. So I checked into it and noticed that I had never upgraded uDig to 1.4.0 (of which I also know I am the maintainer).

I was also told that the Nettools didn't work very well, and even BeeGIS!!!

So I took the time to do a huge upgrade of everything and, very important, also added the 64-bit versions for Osgeo4W. This is almost mandatory for those who do raster data processing with uDig.

So pick your software: 32-bit installer or 64-bit installer.

Enjoy!!! :-)


Spatial Law and Policy: Spatial Law and Policy Update (September 19, 2014)


Data Quality


Spatial Data Infrastructure/Open Data

Public Safety/Law Enforcement/National Security

Technology Platforms


No-Fly Zone: How “drone” safety rules can also help protect privacy. (Slate) Informative article from 2013 that was recently republished.

Internet of Things/Smart Grid/Intelligent Transportation Systems

Remote Sensing



Big Tech at Bay (Financial Times) Informative discussion on issues that will have a significant impact on what the geospatial community will look like in the future.

Spatial Law and Policy: What is Spatial Law and Why is it Important?

Last week (September 18, 2014) I had the opportunity to speak at the Yale University Information Society Project (ISP) "Ideas" Lunch.  A copy of my presentation, "What is Spatial Law and Why is it Important" can be found here.  

LiDAR News: Spectacular Trip

I just spent the two most spectacular and exciting outdoor adventure days of my life. Continue reading →

Click Title to Continue Reading...

VerySpatial: A VerySpatial Podcast – Episode 479

A VerySpatial Podcast
Shownotes – Episode 479
21 September 2014

Main Topic: Chris Tucker, Symposium chair of Geography2050

  • Click to directly download MP3
  • Click to directly download AAC
  • Click for the detailed shownotes


  • Ordinary Life by Lewis Hurrell

  • News

  • Boeing and SpaceX to taxi US astronauts
  • NOAA tests hurricane hunter UAVs
  • OpenDroneMap (
  • Google returns to MyMaps name
  • ArcGIS Runtime 10.2.4 rolling out at the end of the month

  • Web Corner

  • CKAN – Open Source Clearinghouse Software

  • Main topic

  • This week we talk to Chris Tucker about the American Geographical Society’s 2014 fall symposium, Geography2050: Mounting an Expedition to the Future, 19 November in New York City.

  • Events Corner

  • AutoCarto 2014: 5-7 October, Pittsburgh, PA
  • Geo for Good User Summit: 21-24 October, Google, Mountain View, CA
  • Esri International User Conference: 20-24 July, San Diego, CA – Call for Papers now open
Spatialists: The Data Worker’s Manifesto

    This article is a re-post of an article that first appeared on


    Last week I gave a talk at the 8th instalment of the GeoBeer series, held on EBP’s Zurich-Stadelhofen premises and sponsored by EBP and Crosswind. It was titled State of the Union: Data as Enabling Tech‽

    You can check out the whole slidedeck on my private website. (The slides are made with impress.js and best viewed in Chrome. Please ignore my horrible inline CSS.)


    I’m quite sure it’s not best practice to give one’s talk an unintelligible title. Nevertheless, that’s what I did, so let me explain what the different parts mean:

    I chose “state of the union” as a fancy way of expressing that I’m directing my talk primarily at fellow geoinformation and data people.

    With “data” we usually refer to raw observations of some phenomenon. We’ll discuss later, how helpful that definition turns out to be.

    “Enabling tech” would usually expand to “technology”, and the term is used to denote a technical development that makes novel applications possible in the first place. However, in the context of this talk it may be worthwhile to keep the second potential meaning of the stub “tech” – “technique” – in mind as well.

    Finally, the ‽ is called an interrobang and nicely reflects the semantic ambivalence of combining ? and ! into one punctuation mark.


    Sometime in the last decade, we as a society have moved from a situation where data was usually scarce to one where (many forms of) data are abundant. Where before, the first step of analysis was often one of interpolation between valuable data points, we now filter, subsample, and aggregate our data. Not all domains are the same in this respect, obviously. But I think the generalisation pretty much holds, as (often ill-applied) labels such as “big data” or “humongous data” indicate. (Well, the latter is obviously a joke; but think about why it works as such.)

    Big drivers of this development are a) the Web and its numerous branches and platforms and b) smartphones, tablets, phablets and what have you, or more broadly speaking: embedded sensors, GPS loggers, tracking and fleet management systems, automotive sensors, wearables, ‘self-tracking’ or ‘quantified-self’ technology, networked hardware such as appliances (think Internet of Things) and the like.

    In what follows I’m going to talk primarily on crowdsourced data. (In other contexts, crowdsourced (geographic) data is also called e.g. Volunteered Geographic Information, VGI, (a term fraught with problems), or User-Generated Content, UGC.) But some of the assertions also hold for data in general.



    Crowdsourced data, i.e. data that:

    – is gathered from many contributors,

    – in a decentralised fashion,

    – following (at best) informal rules and protocols,

    – voluntarily, unknowingly or with incentives,

    has some issues.

    The large-scale advent of this crowdsourced data of course coincides with the development of the so-called Web 2.0 (in German also referred to as the ‘participation Web’), where anybody could not just be a consumer, but also (at least, in theory) a producer, or: a produser. Or so we were told.


    But: crowdsourced data is biased

    This map shows OpenStreetMap (OSM) node density normalised by inhabitants (compiled by my OII colleagues Stefano de Sabbata and Mark Graham).

    Assuming (somewhat simplifying) that the presence of people effects the build-up of infrastructure, in an ideal world this map would feature a uniform colour everywhere. However, there are regions where relative data density in OSM exceeds that of other regions by 3–4 orders of magnitude! Compare this to the density of placenames in the GeoNames Gazetteer!

    Clearly, offering an “open platform” and encouraging participation is not enough to really level the playing field in user-generation of content. In some regions people might not have the means (spare time, economic freedom, hardware, software, education, technical skills, access to stable (broadband) Internet, motivation) to participate, or they might e.g. have reservations against this kind of project or the organisations behind it.

    Spatially heterogeneous density is just one example of bias we find in crowdsourced data. Another one is termed user contribution bias, where a very small proportion of contributors (think Twitter users, Flickr photographers, Facebook posters, …) creates a large proportion of the data. Depending on the platform we see very lopsided distributions, with a few percent of users being behind a large share of the content. In his Master’s thesis, Timo Grossenbacher found that in his sample of Twitter, 7% of the users created 50% of the tweets. Despite all techno-optimism: clearly, not everyone is a produser, and clearly not all contributors create equal amounts of content!
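Figures like "7% of users created 50% of the tweets" are easy to reproduce for any platform sample. A small illustrative sketch (mine, not from the talk) that, given per-user contribution counts, finds the share of the heaviest contributors needed to cover half of all content:

```javascript
// Illustrative: fraction of users (heaviest contributors first) that
// together account for at least half of all contributions.
function shareOfUsersForHalf(counts) {
    var sorted = counts.slice().sort(function (a, b) { return b - a; });
    var total = sorted.reduce(function (sum, c) { return sum + c; }, 0);
    var running = 0;
    for (var i = 0; i < sorted.length; i++) {
        running += sorted[i];
        if (running >= total / 2) return (i + 1) / sorted.length;
    }
    return 1;
}
```

The closer the result is to 0, the stronger the user contribution bias in the sample.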


    Talking of different kinds of bias: OSM has also been found sexist, for example. OSM contributors (like in many crowdsourcing initiatives) are, as a tendency, young, male, technologically minded, with above average education. Narrow groups of contributors may, inadvertently or consciously, favour their own interests in creating content.

    OSM’s “bottom-up data model” (basically, the community discusses and decides what is mapped how) gives contributors allocative power, i.e. what most people (or the most industrious contributors?) adopt as their practice has good chances to evolve into community (best?) practice.




    Further, some patterns in crowdsourced data may be very surprising.

    One example this talk has already touched upon is user contribution bias, where a small group dominates the crowdsourcing activity. A more complicated example of surprising insights hidden in crowdsourced data is in the figure on the left. Remember that in Wikipedia, the self-declared repository for the sum of all human knowledge, it’s well known that the spatial distribution of geocoded and “geocode-able” articles is strongly biased. A map I made with my colleagues at the OII shows that a part of Europe features as many Wikipedia articles as the rest of the world. (By the way, there is this interesting Wikipedia page that discusses all kinds of biases that affect Wikipedia.)

    Now, as the figure shows, despite this known severe lack of content e.g. in the Middle East and North Africa (MENA), only about a third of edits that are made by contributors in that region are about articles in the same region. Surprisingly, a large proportion of MENA’s (in absolute terms low) editing activity is geared towards contributing to articles outside their own region, about phenomena in North America, Asia and Europe. If you expected, as many people do, that contributors edit mostly about phenomena in their immediate environment and that they tend to “fill in gaps” in content, this insight comes as a surprise.

    Cultural, personal (education, careers, family relations, travel, tourism, …), linguistic, historical, colonial, political, and many more reasons may play into this.


    The new abundance of data, the proliferation of open (government) data, APIs and the current popularity of information or data visualisation (infoviz/dataviz) as well as data-driven journalism (DDJ) has led to many more people and institutions obtaining, processing, analysing, visualising and disseminating data.

    While data-inclined people may welcome this in general, it unfortunately sometimes leads to people attaching false meaning to data, or reading insights into data that it does not support.

    This example shows geocoded tweets in response to the release of a Beyoncé album. In my opinion, while technologically interesting, the visualisation has severe flaws in terms of (re)presentation, cartography and infoviz best practices. But, even more importantly, it utterly fails to mention, e.g., that a) Twitter users are a highly biased, small subgroup of the general population, that b) the proportion of geocoded tweets is estimated to be in the low single-digit percent range (often < 3% is cited!), that c) user contribution bias is likely at play, that d) geolocation may be faulty, etc. etc.
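    To make point b) concrete, here is a quick back-of-the-envelope calculation. Every number below is an illustrative assumption for demonstration, not a measurement:

```python
# Illustrative sketch: how many people actually stand behind the dots on a
# geocoded-tweet map? All rates are assumed values, not measured ones.

population = 100_000      # hypothetical city population
twitter_share = 0.20      # assumed fraction of residents using Twitter
geocoded_share = 0.03     # assumed fraction of tweets carrying a geocode

represented = population * twitter_share * geocoded_share
print(f"people behind the dots: ~{represented:.0f} of {population:,}")
```

    In other words, well under one percent of the population, before user contribution bias is even taken into account.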


    Finally, this figure shows the result of “ping[ing] all the devices on the internet”, according to John Matherly of Shodan. The figure and story went viral, appearing e.g. on Gizmodo, The Next Web, IFLScience!, and many more.

    Turns out, if you dig a bit deeper, there are some rather important disclaimers: e.g. a very limited window during which the analysis was reportedly carried out and, more importantly, only pinging devices addressed using IPv4, not considering IPv6. You can read about these on this Reddit thread.

    Turns out some countries in Asia that have recently invested heavily in broadband Internet infrastructure, as well as large parts of Africa where the Internet is mainly used on mobile devices, rely on IPv6 and thus show up as black holes, or at least rather dark regions, on this “map of the Internet”.
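    Just how much an IPv4-only sweep can miss is easy to quantify; the two address spaces differ by a factor of 2^96:

```python
# IPv4 vs IPv6 address space sizes: an IPv4-only scan can, at best, see a
# vanishingly small corner of the IPv6 world.

ipv4_space = 2 ** 32      # ~4.3 billion addresses
ipv6_space = 2 ** 128

print(f"IPv4 addresses: {ipv4_space:,}")
print(f"IPv6 is larger by a factor of 2**{(ipv6_space // ipv4_space).bit_length() - 1}")
```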

    Sadly, the relative lack of access to the Internet, content and netizens in Africa is real (cf. the OII Wikipedia analyses mentioned above). However, the situation, at least in terms of connected devices, is not as dire as this map makes you believe!

    However, I think the very fact that the map played into this common narrative of unconnected, offline regions is an important factor in its massive proliferation (a.k.a. ‘going viral’). Unfortunately, it seems all this sharing happened without discussions of the data source, data collection method, processing steps, and important disclaimers about the data’s validity and legitimacy – and, let’s face it, with very little critical reception and reflection on the part of the audience, i.e. us.

    The effects? – The original tweet has been retweeted more than 5,500 times! Go figure.


    With these examples in mind, let’s turn to the classic Data-Information-Knowledge-Wisdom workflow or pyramid. In the DIKW mindset, data is composed of raw observations. Only structuring, pattern-detection, and asking the right questions turn data into information. Memorised, recalled and applied in a suitable context, information becomes knowledge. And finally, there’s the wisdom stage that is concerned with ‘why’ rather than ‘what’, ‘when’, ‘where’ and ‘how’ etc.


    Well, turns out, one can argue rather well that ‘raw data’ does not, in fact, exist.

    Data – and I would argue also crowdsourced data – is usually collected with an intent, an application in mind or, if not that, at least with a specific method, from a certain group of people, by a defined group of people, using a certain measuring device. Whether this happens implicitly or explicitly and willingly does not matter in this context. Clearly, however, these factors all potentially affect the applications the data can sensibly be used for.

    So, there goes the title of my talk: ‘data’ may not actually be ‘raw’. And overly focussing on technology and missing out on the underlying technique can be dangerous!


    Putting it bluntly: Unlike this car, data is never general-purpose.







    For all these reasons, and because I care about our profession and about what is being done with data in society at large (think: data-driven churnalism, er, journalism; evidence-based politics; etc.), I would like to propose:

    The Data Worker’s Manifesto.

    It consists of only a few, easily memorised principles:


    Know your data!

    Know the sources of your data, the collection methodology, the sample size and composition, consistency, and any pre-processing steps carried out by others or by yourself. More generally: know the lineage, biases, quality issues, limitations, and legitimate applications and use cases. Know all these very well. If you don’t, try to find out. If you can’t be sure, refrain from using the data.


    Discuss data and how it’s being used.

    The Internet and social media are wonderful things where thousands of links are shared. Every so often you will see an analysis with un(der)-documented input data or methodology.

    Reflect critically on what others may share blindly. If you have questions, remember that the Web is a two-way street these days: gently but firmly ask them, and make your sharing of, and investment in, any analysis dependent on the answers.


    Create and share metadata!

    If you do data-based analyses and produce visualisations, always keep track of what you have done with the data: Did you apply filters? Remove (suspected) outliers? Subsample, downsample, disaggregate, aggregate, combine, split, join, clean, purge, merge, … the data? Document your steps and assumptions and share this metadata to give your collaborators and your audience insight into data provenance and your methodology, along with the results.
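    As a sketch of what this bookkeeping can look like in practice (the helper name and the toy data are my own, not from any particular tool), each transformation can be logged alongside the result:

```python
# Minimal provenance logging: every transformation is applied through a
# wrapper that records what was done, so the metadata travels with the result.

def step(name, func, data, log):
    """Apply one transformation and append its description to the log."""
    log.append(name)
    return func(data)

values = [3.1, 4.7, None, 120.0, 5.2]   # toy measurements
log = []

values = step("drop missing values", lambda d: [v for v in d if v is not None], values, log)
values = step("remove suspected outliers > 100", lambda d: [v for v in d if v <= 100], values, log)
mean = step("aggregate to mean", lambda d: sum(d) / len(d), values, log)

print(f"result: {mean:.2f}")
print("provenance:", " -> ".join(log))
```

    Shipping that provenance line together with the number (or map) is already most of the battle.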

    If you share your insights as social media content (e.g. a map as a PNG file), I recommend burning the metadata into the result, i.e. putting the metadata somewhere into the content so that it’s hard to remove. Because said content will, at some point, be taken, proliferated, received and analysed out of context. Guaranteed.


    3b is very similar to 3: Create and share metadata!

    Seriously: I know metadata is uncool and not sexy at all to maintain. But nothing good comes from not doing it!




    Experts are valuable.

    While the “end of theory” has been proclaimed, I think the “report of [its] death has been greatly exaggerated”.

    Being, or being in contact with, a domain specialist is still very valuable. Sometimes, especially for harder, i.e. more interesting, analyses, it’s indispensable. At the very least, expert knowledge may save you from doing something silly with data you don’t completely understand.


    We’re in this together.

    I feel we are all still coming to terms with the new opportunities the Web and some of the data-related developments I mentioned provide to us (let alone methodological and computational improvements and societal developments). It can be a bumpy, but in any case an exciting, ride, so let’s buckle up, meet and talk and share our experiences – but that’s obviously why all of you have come to this GeoBeer in the first place!



    I feel that despite all these potential pitfalls we should perceive the abundant data, especially new data types such as crowdsourced and open government data, as huge opportunities!

    I’m convinced that, with the right people and the right mindset, we can do great things, privately or politically, that have the potential to improve our respective environments ever so slightly.

    I feel that Switzerland as a democratic and affluent country provides us with an especially friendly environment to get involved, in business, in research, and in societal goals.

    Thank you all for your attention!

    Spatialists: Sunchaser Pictures: Angel City


    I was in L.A. for the first time this March, before Esri’s Business Partner Conference and Dev Summit in Palm Springs. So this time-lapse film of L.A. by Sunchaser Pictures came as a nice diversion.


    (via BoingBoing)

    geomobLDN: Lineup and details for the Nov 4th #geomob

    The details are now clear for the next #geomob, which will take place at 18:30 on Tuesday the 4th of November. We’ll once again be at the BCS at 5 Southampton Street near Covent Garden; many thanks to them for their continued support. Please sign up on Lanyrd so we have a sense of the numbers to expect.

    The format will be the same as ever: each speaker will have 10-15 minutes to share their material, followed by 5 minutes of audience questions. At the end of the evening those who wish can keep the discussion going at a nearby pub over #geobeers, generously funded by our sponsors (thank you!).

    On the 4th of November we’ll be hearing from:

    - Robin Hawkes will make his second #geomob appearance talking about his app to automatically determine building height.

    - Eoin Bailey will tell us about the

    - Dan Stowell will share his “feet from a rat” project

    - Gail Ramster will lift the curtain on the Great British Public Toilet Map

    - Gareth Wood will tell us about Fuller Maps

    and finally …

    - #geomob founder Chris Osborne returns to update us on the geo efforts of

    As always, at the end of the evening attendees will vote by show of hands for the best speaker who will be awarded with a free SplashMap. The non-winners will have to console themselves with free beer.

    We hope you agree it has all the makings of a great evening and look forward to seeing you at 18:30 on the 4th.

    Many thanks to everyone who attended our event this past week. Richard Fairhurst took home the SplashMaps best speaker prize for his presentation about cycle routing on Congratulations to him, but also to all of the other speakers for their well received presentations.

    Here’s a picture from the evening of Jo Cook presenting #portablegis


    We’re always on the hunt for more speakers, and still have a few slots for our first #geomob of 2015 which will take place on the evening of Tuesday the 13th of January (please sign-up).

    See you in November!

    Ed (freyfogle)

    Free and Open Source GIS Ramblings: Labels as text in SVG exports

    Today’s post is inspired by a recent thread on the QGIS user mailing list titled “exporting text to Illustrator?”. The issue was that with the introduction of the new labeling system, all labels were exported as paths when creating an SVG. Unnoticed by almost everyone (huge thanks to Alex Mandel for pointing it out!), an option was added in 2.4 by Larry Shaffer which allows exporting labels as text again.

    To export labels as text, open the Automatic Placement Settings (button in the upper right corner of the label dialog) and uncheck the Draw text as outlines option.


    Note that we are also cautioned that, for now, the developers recommend you only toggle this option right before exporting, and that you recheck it after.

    Alex even recorded a video showcasing the functionality.

    geomaticblog: Mapping for the busy cartographer: today moving dots

    This article describes how to make a quick map using some nice services we have at our hands. Nowadays almost everyone can create a map using services like CartoDB, Mapbox, uMap or even Google My Maps. In this case I’ll show how I used the incredible flexibility of CartoDB to combine some Postgres/PostGIS SQL with CartoCSS to animate some dots on top of OSM cartography rendered by Mapbox.

    This combination is really unique and convenient: other services only allow you to upload or draw some features and apply some static styling to them. But here, using plain old SQL you can adapt your data for different uses; with CartoCSS the power of the Mapnik rendering library is available; and finally, using the awesome Torque capabilities, animation can be added to the map.


    The idea of this map is to represent a crowd of cyclists riding along the future bike lane around the interior ring of the city of Valencia. Tomorrow, Sunday 21 September, there will be a march to show the interest of city bikers in this lane, so my idea was to make people imagine what the city would look like with this (still imaginary) bike lane full of cyclists instead of cars.

    Data preparation

    1. Trace a line that represents the route
    2. Transform the line to EPSG:3857
    3. Make the line denser, placing points every 25 meters using the «Densify geometries given an interval» QGIS processing tool
    4. Convert the line to points (again with Processing) and give them these properties:
      • route: it will serve to produce more routes in the future
      • lap: to separate the points of the route from other points of interest outside the route
      • id: to order the rendering of the points
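    If you want to do step 3 without QGIS, the densification itself is only a few lines of code; a minimal sketch assuming projected (metric) coordinates such as EPSG:3857 and straight segments:

```python
import math

def densify(coords, interval=25.0):
    """Emit a point every `interval` metres along a polyline given in
    projected (metric) coordinates, keeping the original vertices."""
    out = [coords[0]]
    for (x1, y1), (x2, y2) in zip(coords, coords[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        for i in range(1, int(seg // interval) + 1):
            t = i * interval / seg
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
        if out[-1] != (x2, y2):          # keep the segment's end vertex
            out.append((x2, y2))
    return out

# a 100 m segment yields points every 25 m
print(densify([(0.0, 0.0), (100.0, 0.0)]))
```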


    After uploading the dataset to my CartoDB account I’ve created a new visualization that will have these layers:

    1. A blurred line with the route
    2. A point marking the meeting place to start the activity, just in front of the city hall.
    3. The animated points moving over the route


    Load the layer paseo and customise the SQL. The SQL is quite self-explanatory: first we filter the points over the line, and then we use the ST_MakeLine aggregate function to rebuild our original line.

    WITH route AS (
      SELECT *
      FROM paseo
      WHERE route = 1 AND lap>0
      ORDER BY id)
    SELECT
      1 cartodb_id,
      ST_MakeLine(the_geom_webmercator) as the_geom_webmercator
    FROM route
    GROUP BY lap

    The styling of this layer is a simple CartoCSS rule, the only trick being a heavy blur filter.

    #paseo {
        line-color: #A53ED5;
        line-width: 8;
        line-opacity: 0.7;
        line-comp-op: lighten;
        image-filters: agg-stack-blur(10,10);
    }

    Moving dots

    This is the most important part of the map, of course. I have an ordered path of points, and what I want is to show a more or less crowded ring of people moving. To do it, I’ve created a UNION of ten SELECTs on the table, offsetting the id over the full range of ids. To achieve that I’ve used this long SQL:

    WITH route AS (
        SELECT * FROM paseo WHERE lap>0 AND route = 1
    ),
    laps AS (
        SELECT
            cartodb_id, the_geom_webmercator,
            id
        FROM route r1
        UNION ALL
        SELECT
            cartodb_id, the_geom_webmercator,
            CASE WHEN id > 25 THEN id - 25 ELSE id - 25 + 254 END id
        FROM route r2
        UNION ALL
        SELECT
            cartodb_id, the_geom_webmercator,
            CASE WHEN id > 50 THEN id - 50 ELSE id - 50 + 254 END id
        FROM route r3
        UNION ALL
        SELECT
            cartodb_id, the_geom_webmercator,
            CASE WHEN id > 75 THEN id - 75 ELSE id - 75 + 254 END id
        FROM route r4
        UNION ALL
        SELECT
            cartodb_id, the_geom_webmercator,
            CASE WHEN id > 100 THEN id - 100 ELSE id - 100 + 254 END id
        FROM route r5
        UNION ALL
        SELECT
            cartodb_id, the_geom_webmercator,
            CASE WHEN id > 125 THEN id - 125 ELSE id - 125 + 254 END id
        FROM route r6
        UNION ALL
        SELECT
            cartodb_id, the_geom_webmercator,
            CASE WHEN id > 150 THEN id - 150 ELSE id - 150 + 254 END id
        FROM route r7
        UNION ALL
        SELECT
            cartodb_id, the_geom_webmercator,
            CASE WHEN id > 175 THEN id - 175 ELSE id - 175 + 254 END id
        FROM route r8
        UNION ALL
        SELECT
            cartodb_id, the_geom_webmercator,
            CASE WHEN id > 200 THEN id - 200 ELSE id - 200 + 254 END id
        FROM route r9
        UNION ALL
        SELECT
            cartodb_id, the_geom_webmercator,
            CASE WHEN id > 225 THEN id - 225 ELSE id - 225 + 254 END id
        FROM route r10
    )
    SELECT
        cartodb_id, the_geom_webmercator,
        ((random()*10-10) + id) id
    FROM laps

    The first WITH subquery filters the points of the path for this route and feeds the next subquery: 10 unions with an id offset separation of 25 points. This subquery is passed to the main query, which finally randomizes the id (that is, the rendering order) by +-5 positions, so the moving dots are not regular, giving a more interesting (anarchic?) effect.
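    By the way, repetitive SQL like this can also be generated programmatically instead of being hand-written; a sketch (table and column names are taken from the post above, the helper name is my own):

```python
# Generate the WITH/UNION query for any number of laps instead of
# copy-pasting ten SELECTs by hand.

def offset_select(offset, total=254):
    """One SELECT over the route CTE, shifting ids by `offset` with wrap-around."""
    if offset == 0:
        return "SELECT cartodb_id, the_geom_webmercator, id FROM route"
    return ("SELECT cartodb_id, the_geom_webmercator,\n"
            f"  CASE WHEN id > {offset} THEN id - {offset} "
            f"ELSE id - {offset} + {total} END id\nFROM route")

selects = [offset_select(o) for o in range(0, 250, 25)]   # offsets 0, 25, ..., 225
sql = ("WITH route AS (\n  SELECT * FROM paseo WHERE lap>0 AND route = 1\n),\n"
       "laps AS (\n" + "\nUNION ALL\n".join(selects) + "\n)\n"
       "SELECT cartodb_id, the_geom_webmercator,\n"
       "  ((random()*10-10) + id) id\nFROM laps")

print(sql.count("UNION ALL"))   # 9 unions joining the 10 laps
```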

    Using the wizard, the main aspects of the Torque animation are set up. It’s important to use a proper resolution, duration and frame count to adjust the rendering to a nice motion. Afterwards, some last touches to the CSS adjust the compositing operation and especially the trails, leaving just one more rendering of a similar point instead of the default bigger and more transparent feature.

    Map {
      comp-op: minus;
      marker-fill-opacity: 1;
      marker-line-color: #FFFFFF;
      marker-line-width: 0.5;
      marker-line-opacity: 1;
      marker-type: ellipse;
      marker-width: 6;
      marker-fill: #41006D;
    }
    #paseo[frame-offset=2] {}

    Meeting point

    To add a feature to the map to render the meeting point, I manually added a new feature to the layer using the CartoDB editor. This feature has the property lap=0 so it won’t appear in the other layers. The SQL for this layer is just:

    SELECT * FROM paseo WHERE route = 1 and lap = 0

    And the CartoCSS is quite simple, the only important trick being the use of an external SVG marker. I’ve used the town-hall marker from the Mapbox Maki repository directly.

    #paseo {
      marker-fill-opacity: 0.9;
      marker-line-color: #FFF;
      marker-line-width: 1.5;
      marker-line-opacity: 1;
      marker-placement: point;
      marker-type: ellipse;
      marker-width: 40;
      marker-fill: #3B007F;
      marker-allow-overlap: true;
      marker-file: url(;
    }

    Fixed info window

    On this layer I’ve also configured an infowindow so when you click on the town hall icon you get some data about the schedule for the event.

    Base map

    I started using the Nokia day grey base map offered by CartoDB, but after a couple of iterations on the design, I thought it could be great to use a pale purple base map so I went to Mapbox web and quickly crafted a variation of their Mapbox Streets base layer.

    Other components

    Finally, using the new nice CartoDB layout capabilities I’ve added a simple title for the mobile version of the rendering and a couple of texts and an image (uploaded to imgur) for the logo of the group promoting this activity.


    Well, that’s all. You can check the visualization here. The whole job took around 4 to 5 hours. I finished the first animated version in 2 to 3 hours but, you know, the devil is in the details and designing is always about iteration and refinement. Anyway, I’m quite satisfied with the result and I think it serves its purpose. I’ll definitely have the opportunity to review and refine this process, as I imagine more routes and bike marches will happen in Valencia, where bikers are winning the battle :-)

    What do you think about this visualization? What do you like, and what do you hate? Any improvements? I’d love to hear your thoughts and comments to make better maps.

    Update: almost same effect without crazy UNION

    This morning Pedro-Juan asked me: why so many UNIONs? Why not use just one long CASE? After accepting the challenge I did something with CASEs, but then realized that I was just looping over a smaller set of id values, so I could use the modulo function. The long UNION SQL can thus be reduced to this simple query:

    SELECT
        cartodb_id, the_geom_webmercator,
        ((random()*10-10) + id%3) id
    FROM paseo WHERE lap>0 AND route = 1

    Wow, that’s so concise compared with the huge SQL above!! Using id%3 forces all the values into just three buckets, and the subsequent random offset achieves the moving effect.
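    A quick sanity check of what the modulo plus the random term actually produce (sketched in Python rather than SQL; note that % on positive ids yields 0, 1 and 2):

```python
import random

random.seed(42)                 # only to make the sketch reproducible
ids = range(1, 255)             # the ~254 ordered point ids of the route

print(sorted({i % 3 for i in ids}))        # the three buckets

# the expression from the query: (random()*10 - 10) + id % 3
jittered = [(random.random() * 10 - 10) + (i % 3) for i in ids]
print(min(jittered) >= -10 and max(jittered) < 2)
```

    So the ids collapse to a handful of frames, and the random jitter then spreads the dots out again.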

    The CartoCSS also needs some changes to “fill” the rendering over the whole animation time. Check the differences with the code above, especially the number of frame offsets added:

    Map {
      comp-op: minus;
      marker-fill-opacity: 1;
      marker-line-color: #FFFFFF;
      marker-line-width: 0.5;
      marker-line-opacity: 1;
      marker-type: ellipse;
      marker-width: 6;
      marker-fill: #41006D;
    }
    #paseo[frame-offset=4] {}
    #paseo[frame-offset=8] {}
    #paseo[frame-offset=12] {}
    #paseo[frame-offset=14] {}
    #paseo[frame-offset=16] {}
    #paseo[frame-offset=18] {}
    #paseo[frame-offset=20] {}
    #paseo[frame-offset=22] {}

    The resulting visualization can be accessed here. Which one do you like more? Do you think the simplicity is worth the (in my opinion) slightly worse effect?

    Filed under: OSM, PostGIS

    How 2 Map: FOSS4G Day 3

    The first day of the conference had a smooth start: orderly check-in, and a nice touch of offering a self-serve table for grab-bag items (which should keep waste to a minimum).

    GeoServer Feature Frenzy

    A team effort with Andrea covering a little bit of what makes GeoServer amazing.

    GeoServer Feature Frenzy from Jody Garnett
    A vimeo video is available, since a lot of the fun is in the delivery (and Q&A).

    OSGeo Incubation / Programming in Public

    I had the privilege of doing one talk close to my heart: what makes OSGeo amazing, and how we can help new developers bring their software into our software foundation.

    Osgeo incubation from Jody Garnett
    There is a vimeo video; sorry about the audio (I was taller than the microphone).

    OSGeo Live Case Study

    Great to meet Alex Mandel, who puts so much work into OSGeo-Live. This talk covers his thesis work, which looked at how OSGeo gets the message out using tools such as OSGeo Live.
    OSGeo Live lets you try out almost everything open source and geospatial:
    • Quickstart gives enough detail to try it out
    • Takes the installation barrier out of trying out our software. (Glad the feature frenzy indicated GeoServer is actually easy to install and configure).
    • Watching the gap between contributors and translators change over time
    • Points for adjusting downloads by country / population size etc...
    A vimeo video of this talk is now available; there were some good questions. If your project is on SourceForge, contact Alex and see if he can run the same analysis on your project.

    Other Presentations

    I managed to catch a few more presentations/discussions in the afternoon:
    • Arnulf got a good discussion going on certification, which was continued as a Geo4All BOF in the evening. There is a vimeo video of the discussion.
    • Kathleen had a good down-to-earth talk on open source and avoiding burnout; worth watching the vimeo video when you get a chance.

    AnyGeo: Prepare for iOS Sharing Overload as iPhone 6 Lands in the Hands of Customers

    Selling out once again, the iPhone 6 is THE most expensive iPhone to date! Ughh.. here we are again, the dreaded iPhone delivery day. Yes indeed, today is the day that iPhone 6 and 6 Plus customers will start receiving … Continue reading

    It's All About Data: Ahoy! Here There Be Mapnik Beauties

    Today be Talk Like a Pirate Day! A holiday unofficial, to be sure, but some fun to gladden the hearts of salty dogs and landlubbers alike. And here’s another thing that’s fun (and also useful): FME + Mapnik! The MapnikRasterizer gives you fine rule-based styling control to create masterpieces with your vector data, and render the results to raster – ideal for the web and print.


    One of our judges and resident Mapnik expert The Dread Pirate Dmitri gets his arrrrc on at the Esri UC.

    We launched a contest, sailing the seven seas in search of Mapnik masters to bring you inspiration – and the port is now closed, the judges have spoken, and are now off in search of either grog or more data. Perhaps both.

    Why announce our Mapnik contest winner today? Because we had (bad pun alert) avast number of entries! Arrrrr….

    Now, the contest itself was not themed – but since September 19th was approaching… well, we simply couldn’t resist the opportunity to say things like “What’s a pirate’s favorite geometry? ARRRcs!”, swashbuckle around the office, and present you with our top pick and a non-contest favorite – which will both be featured on upcoming FME Beta splash screens.

    Our Winner! Congratulations to Owen Powell

    Many a buccaneer has hailed from the United Kingdom – and so too does our worthy winner, Owen Powell of Arup in the West Midlands with an entry of a decidedly modern aesthetic.


    Owen’s submission – click to enlarge and see the close-up detail.

    Using open data from Ordnance Survey, Owen prepared this area in Scotland by mosaicking and hill shading digital terrain model data, stripping out zero heights to create a crisp coastline. Then he pulled in vector data for the area, and created two representations of traffic routes – one that followed the road network, visualizing traffic, and another two-point shortest route to and from the central destination.

    When it came to styling the data in the MapnikRasterizer, transparency, offsets, and arc smoothing were the trick to visually balancing the two route representations and creating the sort of starburst effect. The red routes, showing the actual roadways, are quite transparent, so as increased traffic adds more copies of the route, the red color intensifies to indicate higher volume. The blue arcs are created with multiple copies of the two-point routes, with a darker and a lighter blue, offsets, and maximum line smoothing. The route destinations are highlighted by buffering building polygons in white, and look quite like points of light in the darkness.
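    The intensifying red is straightforward alpha compositing at work: N overlapping copies at opacity a combine to an effective opacity of 1 - (1 - a)^N. A quick illustration (the 10% opacity here is an assumed value, not Owen's actual setting):

```python
def stacked_opacity(alpha, copies):
    """Effective opacity of `copies` overlapping layers, each drawn at
    `alpha`, under standard (source-over) alpha compositing."""
    return 1 - (1 - alpha) ** copies

for n in (1, 5, 20):
    print(f"{n:2d} copies at 10% opacity -> {stacked_opacity(0.1, n):.0%}")
```

    So a road traversed by twenty routes reads as nearly solid red, while a single traversal stays faint.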


    We Can’t Resist a Pirate Theme

    Though not a qualified entry in our contest, Mapnik Fever took hold of one Danny Barber, who sent us this rendition of the Caribbean. He took his inspiration from a pirate-themed map print he’d bought in Key West years before, and we liked it so much we’re going to feature it on a splash screen too.

    FME Ahoy

    Daniel’s Caribbean map – click for full-size image.

    His source data is mostly from, plus a few images. The vector datasets are global, and he’s used the specify ground extents option in the MapnikRasterizer to clip out the Caribbean area to include. One bit we thought was particularly clever was using a repeating pattern image with a parchment texture (in two different shades) to fill the polygons and create the antique look to the map. Hillshade data, layered over this with some transparency, gives the relief effect.

    A fun treasure map style font is used for the labels, and a single point (created with the Creator transformer) is placed as an anchor point for the compass rose, which is added by styling the point with a graphic in the MapnikRasterizer. Another nice touch is the map border, which is the result of creating multiple copies of the bounding box, with a variety of offsets, patterns, colors, and opacity.

    Since pirates were some of the earliest mapmakers, we think it’s a rather appropriate application – and aye, she’s a beauty!


    Congratulations to Owen on his win! And thanks to Danny, too, for his great work. We hope this gives you some ideas about what you could be doing with the MapnikRasterizer – in your work and as you can see, just for fun too. So look lively, me hearties, and get started with Mapnik here -

    Introduction to MapnikRasterizer on FMEpedia

    The Secret to Mapnik Mastery, a previously recorded webinar

    5 Ways to Do More with Mapnik on our blog

    And remember, though there be rules in Mapnik, a pirate knows that the code is really more of a set of guidelines…

    Yo ho ho!

    The post Ahoy! Here There Be Mapnik Beauties appeared first on Safe Software Blog.

    All Points Blog: GIS Health News Weekly: Why Visit the ER, Sick from Oil Drilling, World Hunger

    Even with Insurance, ERs are Popular The ER is a popular place for Connecticut residents who have asthma. Why not go to a clinic if you have insurance (as more Americans do)? The answer is not geography or money directly, but quality of care, which relates to both. The main... Continue reading

    Technical Ramblings: Creating Sculptures of the World with Computers and Math

    The world around us is a complex place. Sometimes you just want to hold a tiny piece of it in your hand — and with some relatively low cost technological investment, you can do so. Using a $500 quadcopter, I have successfully captured images of a building, converted those images to a 3D model, and 3D printed that model — creating a small model of Cambridge City Hall that I can hold in my hand. The process requires no special skills — just some financial investment and time.


    In March of this year, I purchased a Phantom FC40, a $500 everything-you-need quadcopter. This device is easy to fly and comes with a built-in GPS, an on-board camera (with a mount for a GoPro), and a remote — everything you need to start doing some amateur aerial photography. (You can see some of the aerial photography I’ve done in FC40 Videos and One Minute Onboard.)


    Capturing Photos

    With quadcopter in hand, this weekend I ventured to Cambridge City Hall. While there, despite the gusty winds, I captured approximately 20 minutes of video, attempting to film the building from as many angles as possible.[1] I was using the GoPro Hero 3+ Black I recently got, but for the purposes of this exercise, the FC40 camera would probably have been sufficient. I shot most footage in Narrow or Medium mode, to reduce the fisheye effect of the very wide angle GoPro lens; for the one section of video I shot in wide-angle, I removed the wide angle aspect using GoPro Studio before using the video.

    Once I had the videos, I reviewed them, doing manual frame-grabs from the video to get coverage. On average, I took one shot for about every two seconds of usable video. (Usable video excludes video where the quadcopter is taking off, where it is facing the wrong direction, where it is flying to get to a different part of the building, where it is occluded by trees, etc.) Another option would be to simply use a program like ffmpeg to extract one frame every second:

    ffmpeg -i ~/Documents/input-movie.mp4 -r 1 -f image2 ~/output/project%03d.jpg

    The reasons not to do this are:

    • When flying the quadcopter, some portions (even in a sub-second window) are better than others. Motion blur is a non-trivial problem, even with 60fps capture rates; targeting manual screengrabs at slower motion, or during a more steady period, makes a small but noticeable difference.
    • Many of the shots had the exact same coverage — largely due to the available landing space being all in front of the building. This means that extracting regular shots would have produced many very similar images, which would have increased processing time without noticeably increasing the quality of the results.

    Instead, I simply opened each video in VLC, and snapshotted the images that seemed to improve coverage of the building. (Option-Command-S on Mac; in the Video menu.)

    Photo from City Hall Shoot

    Building the Model

    Once done with this, I loaded the images into a program called PhotoScan, the workhorse of this operation.

    PhotoScan is an amazing tool. I say this, having tried a number of other tools — including commercial products like Autodesk’s 123d Catch and open source tools like VisualSFM. Nothing combined the ease of use and functional output of PhotoScan by a long shot. I’m currently using PhotoScan in 30 day trial mode, but despite the relatively steep price tag ($179 for single-user ’standard’ license) for what is only a hobby, I’m pretty well convinced I’m going to have to buy it, because the results are simply amazing.

    With my 328 photos in hand, I added them to a chunk of a PhotoScan workspace, and set up a Batch Process (Workflow -> Batch Process).


    1. Job Type: Align Photos. Change Point Limit to 5000, due to the relatively small image size (1920 x 1080); further experiments show that this number ends up creating a better model than either 10000 or 20000 points, in a significantly shorter time window.
    2. Job Type: Build Dense Cloud.
    3. Job Type: Build Mesh. Ensure that the Source Data is “Dense Cloud”.
    4. Job Type: Build Texture

    Kicking off the build for these 328 photos uses all of the CPU on my laptop for approximately 1 hour. The majority of this time is spent matching photos via the “Align Photos” step. (An attempt with 20000 points took about 4 hours instead of just one.)

    Setting up workflow

    This produces a textured model, fully visible in 3D. In this particular case, anything other than City Hall is pretty … ‘melty’, as I like to call it, since it was only captured incidentally to the primary flight objective (City Hall itself). From here, you can save the model as a .obj file to use in your favorite 3D program. You can also share it via the web: once exported as a .obj, you can zip up the resulting files (including the texture) and share for free on Sketchfab: Cambridge City Hall on Sketchfab.

    Photoscan assembled

    My final goal is a physical version of the centerpiece of this model: City Hall. To achieve this, my next step is Meshlab. Meshlab can open the “Wavefront Object (.obj)” file I saved from Photoscan without a problem. Using the “Select Vertices” and “Delete Vertices” tools, I am able to remove the extraneous parts of the model, leaving behind only City Hall itself. Using the “Export Mesh As” functionality, I can export this as a .stl file — the file format that my 3D printer uses.[2]
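I do this trimming interactively in Meshlab, but the same idea can be scripted against the OBJ text format directly. A rough sketch in pure Python (hypothetical; real OBJ exports from Photoscan also carry texture coordinates and normals, which this ignores) that keeps only the vertices passing a filter and reindexes the faces that survive:

```python
def crop_obj(lines, keep):
    """Keep only vertices for which keep(x, y, z) is True, drop any
    face that references a removed vertex, and reindex the rest."""
    verts, faces = [], []
    for line in lines:
        if line.startswith("v "):
            verts.append(tuple(float(t) for t in line.split()[1:4]))
        elif line.startswith("f "):
            # Face entries look like "v", "v/vt" or "v/vt/vn"; we only
            # track the vertex index (1-based in the OBJ format).
            faces.append([int(t.split("/")[0]) for t in line.split()[1:]])
    remap, kept = {}, []
    for i, v in enumerate(verts, start=1):
        if keep(*v):
            kept.append(v)
            remap[i] = len(kept)
    out = ["v {} {} {}".format(*v) for v in kept]
    for f in faces:
        if all(i in remap for i in f):
            out.append("f " + " ".join(str(remap[i]) for i in f))
    return out

# Keep everything near the origin in X and Y, roughly what selecting
# City Hall by hand accomplishes (the bounding box is illustrative).
model = ["v 0 0 0", "v 1 0 0", "v 50 0 0", "f 1 2 3", "f 1 2 1"]
print(crop_obj(model, lambda x, y, z: abs(x) < 10 and abs(y) < 10))
```

The reindexing step is the part that is easy to get wrong by hand: OBJ faces reference vertices by position, so every deletion shifts the indices of all later vertices.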

    Trimming City Hall Trimmed City Hall

    Printing the Model - aka ‘hacking it to work’

    The next step is to load up the STL file. Since I don’t actually know how to rotate my model, I load it into Repetier-Host so I can handle rotation during plating. Playing around with the angles, I find that a rotation of 204 degrees around the X axis, -5 degrees around Y, and -15 degrees around Z gives me a reasonably sane-looking model. However, it’s still floating a bit above the bottom, thanks to a small portion of the model that is particularly warped due to low photo coverage. I choose to slice the model anyway, using Slic3r to generate gcode.
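Repetier-Host does the rotation for me, but the underlying operation is just an Euler rotation applied to every vertex; the full transform composes one such rotation per axis. A minimal sketch of the X-axis case:

```python
import math

def rotate_x(point, degrees):
    """Rotate a 3D point around the X axis by the given angle."""
    x, y, z = point
    a = math.radians(degrees)
    return (x,
            y * math.cos(a) - z * math.sin(a),
            y * math.sin(a) + z * math.cos(a))

# A point on the Y axis rotated 90 degrees ends up on the Z axis.
print(rotate_x((0.0, 1.0, 0.0), 90))
```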

    3d Printing: Plating

    As expected, the model has generated some pretty bogus first couple of layers. However, judicious use of copy and paste can help me: using the Repetier jump-to-layer buttons, I remove the first 3 layers of the model, then duplicate the g-code for the 5th layer (the first ‘real’ layer with more than a few spots of actual content), replacing the Z index with the correct height for the first, second, third, and fourth layers.
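That g-code surgery can also be scripted. A hedged sketch in pure Python (the layer heights are illustrative, and real Slic3r output has more varied Z formatting than this handles) that re-emits one layer’s moves at several replacement heights:

```python
import re

def duplicate_layer(layer_lines, z_heights):
    """Emit one copy of the given layer's g-code per requested Z height,
    rewriting the Z coordinate of any move that has one."""
    out = []
    for z in z_heights:
        for line in layer_lines:
            # Replace any Z value in a move with the target height;
            # lines without a Z coordinate pass through unchanged.
            out.append(re.sub(r"Z[-\d.]+", "Z{:.2f}".format(z), line))
    return out

# Re-emit the first 'real' layer at the heights of the four removed
# layers (heights are illustrative, not from my actual print).
layer = ["G1 Z1.00 F3000", "G1 X10 Y10 E0.5", "G1 X20 Y10 E1.0"]
for line in duplicate_layer(layer, [0.25, 0.50, 0.75, 1.00]):
    print(line)
```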

    3d Printing Layers

    With these relatively minor modifications made, my model is ready to print; I copy it to my SD card, and send it off to the printer. An hour or so later, I have a 3D sculpture that matches my model pretty well.



    [1] This can be a challenge in an area where your building is occluded by many trees; taking shots from the ground can help with this, but I didn’t do any of that for this particular project.
    [2] The model that I produce from Meshlab is frankly pretty crappy. A lot of people with experience in this space could probably trivially improve on what I’ve got; I just don’t know much about 3D model work. Whenever I open Blender, I start with a cube and end up with something that looks more like a many-tentacled thing of Lovecraft’s imagining than reality. As such, the 3D printing process can be a bit … fraught.

    GIS LoungeWhere is the Cloud in GIS for Watershed Management?

    Barbara Horvatic, the Marketing Manager at GIS Cloud, explains how using a cloud-based GIS can help create a collaborative environment for pulling together and manipulating disparate datasets. With the recent increase in frequency of flooding events (followed by the usual ‘I haven’t seen this in the past hundred years’ [...]

    The post Where is the Cloud in GIS for Watershed Management? appeared first on GIS Lounge.

    BoundlessPaul’s Perspective on FOSS4G 2014

    FOSS4G 2014

    The world of open source geospatial gathered itself together again last week, as Boundless joined almost 900 developers, users and managers at FOSS4G 2014 in Portland, Oregon. This is the ninth such gathering I’ve attended, and they all have a special local flavour: in this case the flavour of locally-sourced ingredients and micro-brewed beer.

    Exciting Technology

    Each year also has its own favored technology, the topic that packs rooms and spills attendees into the halls: in the early days, MapServer and then PostGIS; Java technology like GeoServer and GeoTools; the first open source slippy maps like OpenLayers; new server technologies like Node.js. This year the topics that I observed drawing in the big crowds were vector tiles and drones.

    Vector tiles are close to home, a technology I understand and have experimented with, and if PostGIS, GeoServer and OpenLayers are not producing and consuming vector tiles within a year I will be surprised. There’s lots of demand for the technology, a solid use case in mobile clients, and a clear implementation path forwards.

    Drones, on the other hand, represent a whole new opportunity for open source since, like open source software, cheap drones and sensors democratize information about location. Cheap tools and open software are a great match. Aaron Racicot shared his experience building a quadrocopter for image acquisition for under $700, and Stephen Mather described how he processes drone photos from imagery into a 3D point cloud and textured terrain mesh using open source tools. From here it’s not hard to imagine a future where a digital model of a city could be automatically and continuously updated from the cameras of hundreds of personal drones swooping around.

    New Ideas

    In talks there were some great examples of Spatial IT: building tools that integrate spatial thinking with existing IT architectures and data flows. For example, the improved MapFish Printing module, which may find its way into OpenGeo Suite over the next year, is centered around producing reports (which might contain maps) rather than producing maps (that may have some reporting).

    Similarly, practical and incremental transformation stories from proprietary to open source were common. Sara Safavi presented basic case studies and patterns for integrating open source into proprietary shops: web first, database first, or desktop first, but never all at once. Karl-Magnus Jonsson shared the story of his city’s move from 100% proprietary to 100% open source over several years of gradual transformation: first the web, then the database, and finally the desktop.

    Growing Community

    On the show floor was the usual collection of companies like ourselves supporting particular open source projects for enterprises, but also a few companies in different but important categories: Amazon and OpenShift, promoting the deployment of open source geospatial systems on their platforms; and PlanetLabs, talking about their new sources of earth imaging. As the open source economy grows, the number of companies that generate value indirectly from and for open source is growing along with it.

    Next year FOSS4G will be in Seoul, South Korea, which will give international attendees a great opportunity to learn what is happening in Asia in general and the Korean technosphere in particular. I’m anticipating seeing some truly outstanding work that would otherwise be very hard to discover; it’s going to be a must-attend event.

    Thanks to the organizers in Portland for a seamless and enjoyable event! And thanks for putting a bird on it!

    The post Paul’s Perspective on FOSS4G 2014 appeared first on Boundless.

    VerySpatialAAG2015 – Gail Hobbs Student Paper Competition

    Every year at the AAG conference many specialty groups host student paper/poster competitions. I strongly encourage you to check these out if you are student, or let a student know about them if you are not. Below is the call for submissions for this year’s competition for the Geography Education Specialty Group.

    The Geography Education Specialty Group (GESG) encourages students to participate in the GESG Gail Hobbs Student Paper Competition at the AAG Annual Meeting in Chicago, Illinois, April 21-25, 2015. Students at all academic levels are encouraged to present their recent geography education research in specifically organized GESG Gail Hobbs Student Paper Competition sessions. Students who present papers in the competition sessions will have their meeting registration fees (student member rate) refunded by the GESG. Additionally, up to two $100 prizes will be awarded to the best papers. In order to be considered, students should contact Dr. Herschel Stern by Wednesday, October 29, 2014. Submission and registration reimbursement details will be provided following initial contact. The final abstract submission deadline for the AAG 2015 conference is November 5, 2014. For any questions or for paper submission information, contact Dr. Herschel Stern at MiraCosta College, One Barnard Drive, Oceanside, CA 92056, (760) 757-2121 x6247.

    Nathan's QGIS and GIS blogExporting QGIS symbols as images

    Ever wanted to export your QGIS symbols as images? Yes. Well here is some Python code that will let you do just that:

    from PyQt4.QtCore import QSize
    from PyQt4.QtGui import QImage, QPainter
    style = QgsStyleV2.defaultStyle()
    names = style.symbolNames()
    size = QSize(64, 64)
    for name in names:
        symbol = style.symbol(name)
        # Only export marker symbols
        if not symbol.type() == QgsSymbolV2.Marker:
            continue
        image = QImage(size, QImage.Format_ARGB32_Premultiplied)
        # Start from a fully transparent image
        image.fill(0)
        painter = QPainter(image)
        symbol.drawPreviewIcon(painter, size)
        painter.end()
        image.save(r"C:\temp\{}.png".format(name), "PNG")

    Or in 2.6 it's even easier:

    from PyQt4.QtCore import QSize
    style = QgsStyleV2.defaultStyle()
    names = style.symbolNames()
    size = QSize(64, 64)
    for name in names:
        symbol = style.symbol(name)
        # Only export marker symbols
        if not symbol.type() == QgsSymbolV2.Marker:
            continue
        image = symbol.asImage(size)
        image.save(r"C:\temp\{}.png".format(name), "PNG")



    Why? Because we can.

    The Map Guy(de)300,000 views!

    In the race between 300,000 page views and 350 blog posts, the page views crossed the finish line first. For the record, this is my 348th post.

    Thank you all for your continued viewership!

    Between the PolesGuide to the Role of Standards in Geospatial Information Management

    Geospatial information comes from many different sources and is managed by many different providers, from mapping agencies to commercial data providers to volunteered geographic information. To optimize usage of this data there is a need to easily discover and share this information. Standards are essential to enable the sharing of authoritative geospatial data and services, and they provide significant value to society and government, including enabling the global competitiveness of both industry and nations.

    Guide to the Role of Standards in Geospatial Information Management

    At the request of the United Nations Global Geospatial Information Management (UNGGIM) Secretariat and Expert Committee, three organizations, the International Organization for Standardization (ISO) Technical Committee 211 Geographic information/Geomatics, the Open Geospatial Consortium (OGC), and the International Hydrographic Organization (IHO), have collaborated to produce a guide that addresses the role of standards in geospatial information management. It is intended to be useful for a wide variety of readers, especially in government, including policy makers, program managers, technical experts, and other individuals involved in geospatial information management. The Guide comprises two documents: an executive-level guide that assists policy makers and program managers in understanding what capabilities are required to meet current and future needs, and a companion document containing detailed technical information on the standards.

    The guide is intended to:

    • Articulate the critical role of standards in geospatial information management
    • Inform policy makers and program managers of the value in using and investing in geospatial standardization
    • Describe the benefits of using open geospatial standards to achieve standardization, data sharing, and interoperability goals.

    Spatial Data Infrastructure (SDI) initiatives worldwide are implementing a common set of international standards for geospatial data. These standards encapsulate geospatial data development, production, management, discovery, access, sharing, visualization, and analysis. As organizations and jurisdictions develop and agree on a common set of open standards, the ability to share geospatial information is enhanced, reducing costs, improving service provision, and facilitating new economic opportunities.

    SDI Standardization Maturity Model

    Community initiatives to share and make geospatial information available are typically oriented around Spatial Data Infrastructure (SDI) initiatives. Standards are a critical element of SDI implementation.

    The Guide defines an SDI Standardization Maturity Model that includes different stages or tiers corresponding to increasing levels of capability.

    1. Tier 1 - Share maps over the Web
    2. Tier 2 - Geospatial Information sharing partnerships - share, integrate and use geospatial data from different providers
    3. Tier 3 - Spatially enabling the nation - large scale efforts to develop a comprehensive SDI that provides access to multiple themes of information, applications for using the shared information, and access via mobile, desktop, and other devices
    4. The future - Spatially enabling the Web of data - delivering geospatial information into the Web of data, and bridging between SDI and a broader ecosystem of information systems.

    The Tiers represent a series of steps in an organization’s ability to offer increasing levels of geospatial information and associated services as part of an information community.

    At the beginning of the process (Tier 1), an organization may want to provide access to geospatial information delivered as map images together with a description of them (metadata).

    As the initiative matures, multiple organizations may wish to collaborate to provide a means to share, search for, access, integrate and cooperatively maintain a particular geospatial information layer (such as transportation) from multiple sources using web services (Tier 2). 

    Larger scale initiatives have a goal of establishing a nation-wide coverage of foundation or framework data as part of their National SDI. Foundation data is an accurate set of key geospatial data layers needed most by different users (imagery, elevation, administrative boundaries, transportation, land use, and water features for example). Providing access to this geospatial Foundation Data for a range of application areas is the next level of maturity (Tier 3). 

    Finally, to address emerging needs and leverage new technologies such as crowd-sourcing of geospatial information and big data analytics, a community would focus on delivering geospatial information from SDI environments into the Web of data (The Future).

    Tier 1 Standards

    Each Tier is associated with a set of SDI standards.  The separate Companion document details the specific standards associated with each Tier.

    For example, recommended Tier 1 standards include standards for accessing and displaying geospatial information as images in any browser.
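OGC’s Web Map Service (WMS) is the kind of standard that Tier here describes: a GetMap request is just an HTTP call with well-defined parameters, so any browser can display the resulting image. A sketch of building such a request in Python (the endpoint and layer name are hypothetical, not a real service):

```python
from urllib.parse import urlencode

# Endpoint and layer name are illustrative only.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "topp:states",
    "CRS": "EPSG:4326",
    # WMS 1.3.0 with EPSG:4326 uses latitude,longitude axis order
    "BBOX": "24.9,-124.7,49.4,-66.9",
    "WIDTH": "800",
    "HEIGHT": "400",
    "FORMAT": "image/png",
}
url = "http://example.com/wms?" + urlencode(params)
print(url)
```

Fetching that URL from a conforming server returns an 800x400 PNG map image, which is why a plain browser is the only client a Tier 1 consumer needs.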

    Standards are also available for encoding, communicating, and sharing visualization rules.

    ISO and OGC standards for catalogue and discovery are widely implemented in national, regional, and local SDIs.

    All Points BlogGIS Education News Weekly: OSGIS Presentations, GeoWeek Events, Assistantships

    OSGIS Presentations Available All recordings from the September 2-3 OSGIS 2014 event, part of Geo For All, are now available online. The theme was “Building up Open Access, Open Education and Open Data for Open Science.” I don't see any index to the presentations but apparently audio... Continue reading