FOSS4G 2014 runs from Sept 8th-12th in Portland, Oregon this year. The conference brings together developers and others working in open source geospatial software, and is an opportunity to participate in the rapidly growing open source geospatial community. Last year's FOSS4G 2013 in Nottingham attracted 28 workshops, 180 presentations and 833 delegates.
Call for papers
Workshop proposals - Due March 15th
At this year's Royal Institution of Chartered Surveyors (RICS) BIM National Conference 2014, Nick Bleukarn of the Severn Partnership described a "scan to BIM" project sponsored by RICS to create a BIM model of the RICS headquarters building in Parliament Square, London. The vision for the project was to scan the building and create a classified (Uniclass2) BIM model that would be kept and maintained rather than thrown away. Severn and RICS collaborated on the specification for the project, with RICS wanting a good level of detail. The deliverables were point clouds and a BIM model classified according to the Uniclass2 standard.
Severn employed total stations and laser scanners. Laser scanners were particularly essential for this project because this is an older building and none of the rooms are "square".
One of the lessons learned is that creating a model from point clouds is very difficult. It is a manual, visual process and is more art than science; for the RICS building it took six weeks of modelling to create the BIM model. For older buildings the process has to be "bespoke" modelling, because every project has its own challenges, and it requires an ongoing dialogue between the client and the surveyor, with frequent comparison of the point cloud against the derived BIM model.
Deriving measurements from the BIM model
As an experiment, RICS then provided the model to four vendors and asked them to calculate some standard dimensions from it, such as gross internal floor area (GIFA). The exercise was not intended to be competitive, so each vendor's results were reported anonymously. The dimensions were also measured manually by tape measure for comparison. Three of the vendors visually inspected the model, as well as using other techniques, in coming up with their estimates; the fourth vendor relied on a completely automated process.
The results were surprising. For example, the GIFA was measured by tape measure to be 4,736 sq m, while the vendors' estimates derived from the BIM model ranged from 3,474 to 4,781 sq m. RICS estimated that, assuming a new-build cost of £4,000 per sq m, the GIFA variance would translate into a difference of roughly £5 million in the construction estimate (the 1,307 sq m spread multiplied by £4,000 per sq m comes to about £5.2 million). The variances in other measurements derived from the BIM model were also generally quite wide.
We’re pleased to announce our latest update to neighborhood boundaries, the product that put us on the map (!) back in 2006. This release features increased coverage in over 370 cities across 7 new countries, bringing global coverage to more than 127,000 neighborhoods across 40 countries. Our sights are now set on our Q2 update, which will include new attributes and expanded coverage.
The main way I was introduced to GIS was through Esri Virtual Campus Courses. I’m not being critical at all; in fact I have to say I thoroughly enjoyed my Esri Virtual Campus Courses and I kind of miss the Learning Pathways. I really do think Esri Virtual Campus courses are an excellent way to learn Esri software. And those same courses have valuable GIS concepts in them if you can parse them out of ArcGIS’s tools. Even so, I have not been taking Virtual Campus courses lately. I am sure Esri hasn’t slowed down on their training seminars, but I have not seen any of interest. But when I saw the revised Creating and Editing Metadata in ArcGIS, designed specifically for 10.2, I was in. I figured it would be repetitive since I did take their “original” course with exactly the same title. I didn’t care, though, because I needed something to get me back to looking at how much better metadata handling is in 10.2 (or maybe just to make me miss ArcGIS 9.x again).
What I did not figure is that it would make me feel as if I am a GeoJournalist. From middle school through undergraduate school, I was very active in various forms of school journalism. It started with my all-time favorite course, “Effective Communications,” in eighth grade, which had us editing the middle school paper. I’ll show my age and state that our typing courses were on Apple IIe’s and we were designing the paper on “happy” Macs in PageMaker. We even cut the color out of mylar. And the course gave me my first real photojournalistic tasks. It led me to the sidelines of many sporting events… ok, enough reminiscing.
So, I’m sitting down to read through the materials and in the key definitions, the metadata definition feels an awful lot like the 5W’s (and an H)…just about data. I’m immediately taken back to my Effective Communications class, discussing how to write interview questions that extract the 5W’s from the subject. Yes, I know, I was reminiscing again. Sorry. But, the point is that metadata really is answering the 5W’s and ESPECIALLY the H about your geospatial data and I like that explanation.
I made it through another section of the course and started to reminisce again. I have always been a firm believer in the A Picture is Worth a Thousand Words concept, since I am a photographer, and I often think of metadata as the writing on the back of the “analog” print or the Digital Asset Management information for digital photos. I thought they were going to use the tired old writing-on-the-back-of-a-photograph example, but instead they basically used captions and said “Each photo can tell you something, but its photographic value is limited if you do not have other information.” Thus, captions. If I had used their examples in my eighth grade Effective Communications class, I would have failed. But I guess they don’t want you to write those thousand words down.
Unfortunately, the rest of the course was downhill from there. Yes, they showed the all-important method of changing from Item Description to your standard of choice, but they didn’t really talk about the tips at the bottom or anything else. In other words, my general frustrations with how ArcGIS handles metadata are still well founded. And, yes, I suppose that translates to still missing ArcGIS 9.x!
One funny part was that Esri HQ seems to have one of the same problems I have. If you edit your metadata in ArcGIS, it reports stuff that you don’t really want to have reported…like file locations. This is something that, if you’re delivering to a client, you will have to update on the final drive you deliver. There certainly are wonderful things they collect (such as the exact version of your software), but the file location is definitely not one of those wonderful automatically collected items. Anyway, I was amused that they blurred out that portion of the data.
My final review is that I suppose I have to thank Esri for a trip down memory lane with this Virtual Campus course. I have a few new ideas for how to define metadata when I’m teaching it. But overall I still find it tragic that I miss ArcGIS 9.x. What that really means, though, is that I should be working on building my own interface. When I do accomplish that, I will share.
Summer is a great time for students to focus their efforts on learning more about GIS and developing hands-on experience. Start lining up your GIS summer experience. Listed here are summer GIS opportunities. GIS Data Collection Volunteers | Cusco, Peru A small NGO located in Calca, Peru called the Andean Alliance for Sustainable Development […]
The GeoServer team is pleased to announce the release of GeoServer 2.4.5:
This is the final stable release of the 2.4 series.
Now is a good time to plan your upgrade to the 2.5 series. Our extended release schedule provides a six-month maintenance window, allowing you to upgrade when you are ready.
This release is made in conjunction with GeoTools 10.5.
Yesterday, with bipartisan support, the U.S. House of Representatives passed the Energy Efficiency Improvement Act of 2014 (HR 2126), which was sponsored by Representatives David McKinley (R) and Peter Welch (D). The bill includes several energy efficiency provisions.
Each season has its unique traits. Some are good, some are not so good; it depends on who you talk to, of course. One of the benefits of winter is the view, which has a barren kind of beauty. There is no doubt that leaves and green landscapes are appealing, but as an outdoor enthusiast and trail junkie (both on foot and on wheels), I can appreciate the outdoors in every variation. There is a lot to be said for increased visibility too. When the trees are bare, you can see the contour of the land and the flow of the trail.
Sadly, I can also see litter; particularly, plastic bags. When you are hiking down a trail it’s easy to reach down and pick that trash or stray bag up. The easy cleanup opportunity is lost when you are barreling down the highway on the way to work though. It is especially discouraging to see hordes of plastic bags clinging to the tops of trees. These bags have obviously been ejected from passing vehicles to be carried by the wind to their final resting place. I’m sure they are present all year, but the winter draws back the veil of leaves to reveal just how much wasted plastic we generate.
What happens to the rest of the plastic bags that don’t get stuck in our suburban forests? And, what can we do to mitigate our waste? For years I was under the impression that we could not recycle these plastic grocery haulers. I’ve reused them as trash bags, lunch bags and anything else I could think of, but ultimately that just prolongs their life before they end up in the landfill. Luckily, just like a lot of our modern day materials, these can be recycled. So plastic bags really end up in three places (like everything else really).
In 2011, Americans produced around 250 million tons of waste, 32 million tons of which was plastic. That’s 4.4 pounds of waste per person per day! It’s up to you to help keep plastic bags and other waste out of landfills. (http://www.epa.gov/epawaste/nonhaz/municipal/pubs/MSWcharacterization_508_053113_fs.pdf)
There is hope, because recycling and composting kept 87 million tons of material out of landfills that year. That works out to an average of about 1.53 pounds recycled and composted out of our 4.4 pounds per person per day. About 11 percent of the recycled waste from the overall count was in the plastics category that includes plastic bags. Unfortunately, only 8 percent of the total plastic waste generated was recycled in 2011. We can change this. There are more than 1,800 businesses in the U.S. that handle or reclaim post-consumer plastics. Put simply, bring your used plastic bags to the grocery store when you shop and drop them in the bag recycling bin. If your store doesn’t have a recycling service for plastic bags, ask the store manager why not or what the alternatives are. You can also find a curbside drop-off. http://www.epa.gov/osw/conserve/materials/plastics.htm.
Where does recycled plastic go? You handle it all the time and probably don’t realize it. Products include bottles, carpet, textiles, paper coating and even clothes.
Don’t let your bags end up here. It’s an eyesore for your community, dangerous for the animals in your environment and doesn’t contribute to the reduction of source materials needed for plastic manufacturing.
What’s the bottom line? Recycle your plastic bags; it’s easy. Why? It helps keep trash off the streets, it reduces the need for raw resources in manufacturing, and it cuts the amount of waste that goes to the landfill; it even helps generate power. Did you know that you can save enough energy to power your laptop for 3.4 hours by recycling 10 plastic bags? You can find these fun facts and other great information here: http://www2.epa.gov/recycle.
Shannon Bond is a multimedia production specialist with EPA Region 7’s Office of Public Affairs. He has served in a host of roles including military policeman, corrections officer, network operations specialist, photojournalist, broadcast specialist and public affairs superintendent.
We’ve had several opportunities to refine GeoGit workflows in real-world situations, but among the most fulfilling was assisting with the response to Typhoon Yolanda (also known as Typhoon Haiyan) in the Philippines. It was the strongest cyclone to make landfall in recorded history, resulting in an urgent need to share data about the damage to help with recovery and reconstruction.
To meet this need, the Global Facility for Disaster Reduction and Recovery (GFDRR) teamed up with the American Red Cross and the Humanitarian OpenStreetMap Team (HOT) and launched an open data platform to gather and share data about Yolanda. The ROGUE project, which helps develop GeoGit, was asked to help manage and distribute extracts of OpenStreetMap data. As described below, we created a powerful bidirectional workflow with OpenStreetMap that enabled us not only to derive and publish up-to-date data for response and recovery efforts but also to contribute back to OpenStreetMap.
Thanks largely to HOT’s efforts, a large number of damaged and destroyed buildings were mapped into OpenStreetMap using commercial satellite imagery distributed under the Next View license or the State Department’s Imagery to the Crowd program. GeoGit was used to extract data from OpenStreetMap and transform it into formats more useful to traditional GIS applications.
While GeoGit supports reading and writing OpenStreetMap data in a variety of ways, the Yolanda efforts started with the daily .pbf downloads from Geofabrik, which were then imported into a GeoGit repository using the geogit osm import command. This initial import brings the data into the standard node and way layers in a GeoGit repository, with all of the OpenStreetMap tags attached to each feature. During the first few imports we were able to find and solve some performance bottlenecks, reducing the import time from over an hour to just a few minutes.
Once imported, the geogit osm map command was used to map the data into more traditional sets of layers, using the tags as attributes. A JSON mapping file specifies which tags are used to separate the features into layers and which attributes to assign to each feature. The key mapping involved taking nodes and ways tagged with typhoon:damage=yes and translating them into damage_line layers with associated attributes. Over the course of mapping the data, we were able to make improvements to the codebase and workflow in several areas.
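For a sense of what such a mapping file looks like, it might resemble the sketch below. The keys, layer name and field definitions are illustrative assumptions rather than the project’s actual mapping file, so treat this as the rough shape of the idea and consult the GeoGit OSM documentation for the exact syntax.

{
  "rules": [
    {
      "name": "damage_line",
      "filter": { "typhoon:damage": ["yes"] },
      "fields": {
        "typhoon:damage": { "name": "damage", "type": "STRING" },
        "name": { "name": "name", "type": "STRING" }
      }
    }
  ]
}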
Once the repository had the data organized into the right schema, we used the geogit export pg command to load snapshots into a PostGIS database and serve them to the web. Since we wanted to provide the most current data, we used the geogit osm apply-diff command to update the repository with daily updates from the OSM planet. This ensured that our repository always reflected recent edits and that layers were exported and updated on the site.
In addition to staying in sync with the global OpenStreetMap planet, GeoGit made it possible to change layers in our repository and apply those changes back to OpenStreetMap, enabling a fully round-trip, bidirectional workflow. For example, we found many misspellings and inconsistent uses of tags in the data and were able to correct them. We fixed these issues against our PostGIS snapshot, applied the changes back to the repository, generated a changeset using the geogit osm create-changeset command, and finally uploaded the changeset using JOSM. In the process, we were once again able to improve these functions based on real-world usage.
These tools enable a powerful bidirectional workflow with OpenStreetMap. We demonstrated that data can be imported from OpenStreetMap into a local repository, mapped into a set of layers with a well-defined schema, and served via OGC services. Repositories can be kept in sync with OpenStreetMap over time and, if changes are made to the local repository, GeoGit enables us to produce changesets that can be contributed to the global OSM dataset. Using this same workflow, it becomes possible for users to effectively work with a local extract of OSM data for both making and applying local edits as well as incorporating upstream changes.
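Pulled together, the loop described above ran roughly along these lines. The command names are the ones mentioned in this post, but the file names, arguments and flags shown are illustrative assumptions rather than verified invocations, so check the GeoGit documentation before reusing them.

# Import the daily Geofabrik .pbf extract into the repository's node and way layers
geogit osm import philippines-latest.osm.pbf

# Map tagged nodes and ways into conventional layers using the JSON mapping file
geogit osm map mapping.json

# Export a snapshot of a mapped layer to PostGIS for publication (connection options omitted)
geogit export pg damage_line

# Keep the repository in sync by applying the daily diff from the OSM planet
geogit osm apply-diff daily-update.osc

# After fixing tags locally, generate a changeset to contribute back, then upload it with JOSM
geogit osm create-changeset -f changeset.xml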
Jeff Johnson will present more about GeoGit-based OpenStreetMap import workflows on April 13th at State of the Map US.
We received an email today from Michael Cliverton with a problem he was experiencing within QGIS Desktop concerning projections. Don’t ask me why, but I enjoy solving projection issues while others scream at their computers in tongues. The screaming is occasionally laced with profanities, but I digress. Michael did nothing of the sort. He actually ran into an issue we have been seeing pop up, one we even discuss briefly in our Introduction to QGIS workshop as a potential gotcha to look out for. I’ll go over the problem and provide a few workarounds.
2. Saving layers as new files with defined projection:
3. Reproject the shapefiles into the imagery’s CRS:
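As a command-line alternative to the QGIS Save As route, the same reprojection can be done with ogr2ogr. The file names and EPSG codes below are placeholders; substitute your shapefile's actual CRS for -s_srs and the imagery's CRS for -t_srs:

ogr2ogr -s_srs EPSG:4326 -t_srs EPSG:3857 roads_reprojected.shp roads.shp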
Hope this helps others and happy (Q)GISing!
If there is a service you rely upon from the National Atlas of the United States, you may only have until September to take advantage of it. Starting on September 30, 2014, select services from the National Atlas will be merged into the National Map. The USGS recently released information about the merge, stating: During this year, National […]
Most imagery used in GIS projects consists of satellite images or aerial photographs, but it can also include thermal images, digital elevation models (DEMs), scanned maps and land classification maps. This article examines imagery and how to effectively gather, store, process and interpret it for a variety of different GIS projects.
File-based geodatabases… love them or hate them. I’ve done both. Back in 2009 I even did a two-hour workshop on Switching to File Based Geodatabases and why this was the best thing ever.
A question came up on the QGIS users list and it led to a slight detour, as the days usually go. What do you do if you get a file-based geodatabase and you are using QGIS? At about 9.x everyone was usually screwed. Then ESRI released their API and the good people at GDAL fixed it to where life was much easier: you could break out OGR and get your data out.
I’ve been in the middle of writing the QGIS Part Deux class. It’s going to be heavy on editing and working with your data, with the emphasis on your data. The nice thing is that by the end of the three-day class (Parts I, II, and III) you aren’t going to care what software you use – you are only going to care about your data and the tools you can use on it. The tool we will discuss in the class will be QGIS (obviously).
So what happens when you are sitting there and you get a file-based geodatabase from someone? I’ve used Linux and Windows to pull data out into shapefiles, but this example ran from the Windows side of life with the OSGeo4W installer as described here. Being a complete slacker I didn’t try the stand-alone installer, but I did get anecdotal evidence that it works too. Well – because of GDAL you can open a file-based geodatabase as a directory once you get the gdal-filegdb library installed. You can open your data. I even edited the data (but that scared the crap out of me).
In all of that I did something completely out of the ordinary – or at least I thought it was. It actually is quite brilliant that QGIS can do this. I’ve talked about SpatiaLite before, and with the OGC’s announcement of GeoPackage I have a bit more hope for it as an alternative. So after opening the file-based geodatabase in QGIS I cracked open SpatiaLite, made a new database (there are several ways to do it), and used the DB Manager to pull data out of the file-based geodatabase and into SpatiaLite.
Why is that remotely brilliant? When I die I’m going to have the word descriptio put on my tombstone. For those who have dealt with moving data from a database into a shapefile, that’s always how I describe the truncation that occurs. It is a pain when you move something into a shapefile and all the field names get truncated. Well, with the DB Manager that comes with QGIS you can move it straight from the file-based geodatabase into SpatiaLite (or PostGIS). Yes, this is awesome – BUT – you can’t go back to a file-based geodatabase… BUT you can hand off the SpatiaLite database to your ESRI users.
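If you would rather script that handoff than click through DB Manager, GDAL/OGR can do the same copy. Below is a minimal sketch using the OGR Python bindings; the file names are placeholders, and it assumes your GDAL build includes a FileGDB-capable driver (the ESRI FileGDB API driver mentioned above, or the read-only OpenFileGDB driver).

from osgeo import ogr

# Open the file geodatabase read-only; GDAL picks whichever FileGDB driver it has
gdb = ogr.Open("example.gdb")

# Create a new SpatiaLite database to copy into
driver = ogr.GetDriverByName("SQLite")
out = driver.CreateDataSource("example.sqlite", options=["SPATIALITE=YES"])

# Copy every layer across; field names survive intact (no shapefile truncation)
for i in range(gdb.GetLayerCount()):
    layer = gdb.GetLayerByIndex(i)
    out.CopyLayer(layer, layer.GetName())

# Setting the datasources to None flushes and closes them
out = None
gdb = None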
So a diversion out of my day. I learned something. Hopefully you did too….QGIS – it’s awesome. It’s OpenSource. (shameless plug WE TEACH CLASSES ABOUT IT!!)
At the National Rural Electric Coop Assn (NRECA) TechAdvantage Conference in Nashville, Nathan Frisby, GIS Engineer at Big Sandy Rural Electric Coop (RECC), and David Herron of Leidos gave a presentation about their experience implementing a GIS at Big Sandy that I think would be of interest to any utility that has data quality issues (in other words, most utilities). Their experience was particularly notable because Big Sandy is a very small utility, with about 13,200 customers and commensurately limited IT resources, yet they managed to get the data quality part right. I've blogged on many occasions about the challenge of the quality of location data at utilities. It is a major challenge, and one that the move to smart grid is forcing all utilities to address. Big Sandy focused not only on ensuring that the location and other data they captured was accurate and up to date, but also implemented business practices to ensure that this high level of data quality is maintained.
Big Sandy RECC provides electric power for 13,200 members in an 8-county territory in Kentucky that is too rural to be of interest to the investor-owned utilities. The people in the area got together and formed a coop in 1940.
Collecting location data
The first step in the process they went through was a GPS inventory of everything in the field including meters, poles, transformers, lights, fuses, and so on. As part of this effort they physically tagged all their poles with a unique serial number. When they inventoried poles they captured everything on them including transformers, conductors, insulators, cross arms and equipment from other utilities such as the local phone company (joint use). But they went one step further and noted any problems such as broken cross arms, cracked insulators, or rotten poles. This amounted to a full inventory plus a full field inspection. One of the reasons this was so successful is that the folks that conducted the inventory were a dedicated team whose sole responsibility was the GPS inventory.
Another thing they did at the same time, and one that proved critical down the road, was linking customer addresses to meters and meters to transformers. As I have blogged about before, linking customers' addresses to meters is a challenge for many utilities, but it is essential for many critical applications, including outage management.
Maintaining data quality
Their data quality maintenance program is basically a double-check system. The field worker, typically a linesman, records the changes he or she makes on the work order, which is returned to the records team. In addition, once a month a supervisor takes all the completed work orders, goes out into the field, and verifies the changes recorded on them, looking carefully for any deviations between what is found in the field and what is recorded on the work order. Only after that is the asset database updated.
Some of the significant benefits that Big Sandy has realized by implementing a GIS with ensured location data quality include,
Perhaps the most telling benefit is how it has improved their ability to report to FEMA after a disaster. For example, an ice storm in 2012 happened to hit an area they had GPS-inventoried some time before the storm, and the data helped them report their damage claims to FEMA accurately. When the FEMA inspector visited to verify the claims, she randomly selected 12 poles and asked to go out into the field to check that what was recorded on the work orders corresponded to the work that had actually been done on those assets. After visiting three poles and comparing the very detailed information Big Sandy had recorded, she was so impressed with the accuracy and detail that she said this was the most detailed and reliable data she had seen and that she didn't need to visit any more poles. Very impressive!
I would also add that the high quality of their spatial data means they are well prepared for smart grid.
Today I got sent a file by a colleague in OSM format. I’d never come across the format before, but I did a quick check and found that OGR could read it (like pretty much every vector GIS format under the sun). So, I ran a quick OGR command:
ogr2ogr -f "ESRI Shapefile" Villages.shp Villages.osm
and imported the results into QGIS:
Oh dear. The data that I was given was meant to be polygons covering village areas in India, but when I imported it I just got all of the vertices of the polygons. I looked around for a while for the best way to convert this in QGIS, but I gave up when I found that the attribute table didn’t seem to have any information showing in which order the nodes should be joined to create polygons (without that information the number of possible polygons is huge, and nothing automated will be able to do it).
Luckily, when I opened the OSM file in a text editor I found that it was XML – and fairly sensible XML at that. Basically, the format was this:
<?xml version='1.0' encoding='UTF-8'?>
<osm version='0.6' upload='true' generator='JOSM'>
  <node id='-66815' action='modify' visible='true' lat='17.13506710612288' lon='78.42996272928455' />
  <node id='-67100' action='modify' visible='true' lat='17.162222079689492' lon='78.68737797470271' />
  <node id='-69270' action='modify' visible='true' lat='17.207542172488647' lon='78.71433675626031' />
  <way id='-69328' action='modify' timestamp='2013-10-10T16:16:56Z' visible='true'>
    <nd ref='-67100' />
    <nd ref='-66815' />
    <nd ref='-69270' />
    <nd ref='-67100' />
    <tag k='area' v='yes' />
    <tag k='landuse' v='residential' />
    <tag k='place' v='Thummaloor main village' />
  </way>
</osm>
Under a main <osm> tag, there seemed to be a number of <node> elements, each of which had a latitude and longitude value, and then a number of <way> elements, each of which defined a polygon by referencing the nodes through their IDs and then adding a few tags with useful information. Great, I thought, I can write some code to process this!
So, that’s what I did, and for those of you who don’t want to see the full explanation, the code is available here.
I used Python, with the built-in ElementTree library for parsing the XML, plus the Shapely and Fiona libraries for dealing with the polygon geometry and writing the shapefile respectively. The code is fairly self-explanatory; essentially it reads the nodes into a lookup of IDs against coordinates, follows each way’s node references in order to build a polygon, pulls out the tags as attributes, and writes the results to a shapefile.
After all of that, we get this lovely output (overlain with the points shown above):
The code definitely isn’t wonderful, but it does the job, and it is relatively well commented, so you should be able to modify it to fit your needs.
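The original listing isn’t reproduced here, but a minimal sketch of the approach described above might look like the following. It assumes the input is the Villages.osm file from earlier, that every way is a closed ring, and that only the place and landuse tags are wanted as attributes.

import xml.etree.ElementTree as ET

import fiona
from fiona.crs import from_epsg
from shapely.geometry import Polygon, mapping

# Parse the OSM XML and build a lookup of node id -> (lon, lat)
root = ET.parse("Villages.osm").getroot()
nodes = {n.get("id"): (float(n.get("lon")), float(n.get("lat")))
         for n in root.findall("node")}

schema = {"geometry": "Polygon",
          "properties": {"place": "str", "landuse": "str"}}

# OSM coordinates are WGS84 longitude/latitude, hence EPSG:4326
with fiona.open("Villages.shp", "w", driver="ESRI Shapefile",
                crs=from_epsg(4326), schema=schema) as sink:
    for way in root.findall("way"):
        # Each <way> lists its nodes in ring order, so following the refs gives the boundary
        coords = [nodes[nd.get("ref")] for nd in way.findall("nd")]
        tags = {t.get("k"): t.get("v") for t in way.findall("tag")}
        sink.write({"geometry": mapping(Polygon(coords)),
                    "properties": {"place": tags.get("place", ""),
                                   "landuse": tags.get("landuse", "")}})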