Planet Geospatial

LiDAR News: 50 Years of 3D

Turns out Dassault has been promoting the advantages of 3D for nearly 50 years - incredible.


The Map Guy(de): Bootstrap map viewer templates

So this past week, I attended another hackathon and our hack was yet-another-twitter-bootstrap-with-openlayers concoction. The problem is that every time I go down this path, I lose several critical hours re-creating my desired responsive layout of a map viewer with a sidebar in Bootstrap CSS. You'd think that, having done this many times now, I'd have memorised the necessary HTML markup and CSS by now.

But that is just not the case, so after the event (and a good long post-hackathon sleep), I fired up Sublime Text and set out to solve this problem once and for all: a set of starter bootstrap templates that should serve as a useful foundation to build a bootstrap-based map viewer web application on top of. Several hours later, there are two templates available.

A 2-column template: a full-screen map viewer with a sidebar to the left:


And a 3-column template, with sidebars to the left and right, reminiscent of the classic MapGuide AJAX viewer.


A key feature of both templates is that the floating sidebars are collapsible.


On small displays, sidebars start off initially collapsed, but can be brought out through their respective toggle buttons.


You can check out the templates below:


Do you have ideas for improving my initial design and have some web design skills? Send me some pull requests.

thematic mapping blog: Geotagging photos using GPS tracks, ExifTool and Picasa

I take a lot of photos while trekking, and most of the time I'm also carrying a GPS with me. As my camera doesn't have a built-in GPS, my photos are not geotagged while shooting. Luckily, this is easy to fix if you've kept your GPS logs from the trip.

I'm still very happy with my Garmin GPSmap 60CSx that I bought 7 years ago. By changing the setup, the GPS can automatically save the tracks to the memory card. I get one GPX file for each day of trekking, named with the date. I can easily transfer these tracks to my computer or smartphone with a cable or a card reader.

Before I converted to Mac, I used GeoSetter to geotag my photos on Windows. Now, I want to do it on the command line using the great ExifTool by Phil Harvey. I installed it on my MacBook using Homebrew:

brew install exiftool

After copying my GPX file to the image folder, I'm simply running:

exiftool -geotag=my.gpx ./

If you forgot to sync the camera and GPS time before your trip, you can use the geosync option to fix it (60:00 = 60 minutes):

exiftool -geotag=20140329.gpx -geosync=-60:00 ./

There are a lot of options, so make sure to read the "Geotagging with ExifTool" documentation. ExifTool modifies the Exif headers of your image files, storing the location data in the same file.
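To spot-check what was written, ExifTool can also read the tags back (the filename here is just an example):

exiftool -GPSLatitude -GPSLongitude -GPSDateTime IMG_1234.JPG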

To see the result on a map, I'm using Picasa.  


Click the map pin button (bottom right) to see the map. If the positions are not shown on the map, try right-clicking the image folder and selecting "Refresh Thumbnails".

If you don't have a GPS track you can always use Picasa to manually geotag your photos. 

Be aware! I just learnt that social media sites like Facebook, Twitter and Instagram remove the Exif data from your images. Google+ doesn't.

Now, how can you display the photos on your own map? It will be the topic of my next blog post. 

ROK Technologies Blog: Advantages of Storing GIS Data in the Cloud

Have you heard that ROK has an Instant Enterprise cloud-based GIS solution? This week, I wanted to blog and walk you through the advantages of going Enterprise from the GIS data perspective. If you want to know more, please feel free to contact us.


So, what does a car need in order to operate? Fuel.

What does a GIS organization need? Data.


And why would an organization want to move their GIS data to ROK’s private cloud?


  • Scalability - Maybe you are a one-person GIS shop. What happens when you grow to 5, 10, 100 employees? It is no longer realistic to pass around a shapefile or have a few file geodatabases stored on local computers. You need an SDE geodatabase with editing roles, versioning, archiving, and editor tracking. You need multiple users accessing, querying, and editing data from one spot. You need an Enterprise GIS data solution. It is time to reconcile and post.

  • Free your IT - Is your organization tired of walking on eggshells around the ole’ GIS Server in the closet while IT swears on every crash that this is the “last” raising of the dead? Has your organization considered moving to ArcSDE but lacks the core competencies that it takes to set up the geodatabase of your dreams?

  • Redundant backups, disaster recovery, multiple data centers - The cloud is a safe and secure place for your GIS data.

  • Direct edits to and from the internet - Make data edits using ArcGIS for Desktop and see them served out in real time to all of your GIS web applications. Make data edits through ArcGIS Online or your custom web app and see them return in real time to your SDE Geodatabase.

  • Anywhere access - Do I need to explain this? If you have internet, then you have data. At work, home, or on vacation… but only if you want to while on vacation. Oh yeah… and your operating system is of no concern.

  • Building something better - Data is fuel, fuel powers the enterprise, the enterprise is in the cloud.


Allow your organization to focus on missions and goals. Start your Enterprise GIS today.

thematic mapping blog: Geotagging and Picasa Web Albums API, or was it Google+ Photos?

In my last blog post, I presented a new plugin, Leaflet.Photo, that allows you to display geotagged photos from any source. Among them was Google+ Photos and Picasa Web Albums API. My plan is to use this API for my travel map, and this is why.

Does Picasa Web Albums still exist? 
It's a bit messy these days. Google is trying to transition from Picasa Web Albums to Google+ Photos, as photos are the number one thing that people want to share on social networks. When you use Picasa to share your albums (Sync to Web), the album URL is now on your Google+ profile, and not on Picasa Web Albums (which now just redirects me to Google+). This is the URL to the public album from my trip to the Trollfjord:

https://plus.google.com/photos/+BjørnSandvik/albums/6052628080819524545

It also works with your Google+ user id:

https://plus.google.com/photos/118196887774002693676/albums/6052628080819524545

My public Google+ web album. The album contains both photos and videos. 

The thing is, there is no Google+ API for photos and videos yet (apparently they were working on it back in 2011). But the Picasa Web Albums API still works on your Google+ albums.

The Picasa Web Albums API is not the easiest API I've worked with, but it's flexible and quite fast. This is an XML feed of my public album from Trollfjord:

https://picasaweb.google.com/data/feed/api/user/118196887774002693676/albumid/6052628080819524545

The user number and album id are the same as above. Or, better for your JavaScript apps, a JSON feed:

https://picasaweb.google.com/data/feed/api/user/118196887774002693676/albumid/6052628080819524545?alt=json

And if you're still using JSONP:

https://picasaweb.google.com/data/feed/api/user/118196887774002693676/albumid/6052628080819524545?alt=json-in-script

If you click on any of these links, you'll see that it's not a very compact format. There is a lot of data that you don't need. Although it's a bit involved, you can select the fields you want to include in the feed. I selected the following elements:
  • Photo URL: entry/media:group/media:content
  • Photo caption: entry/media:group/media:description
  • Photo thumbnail URL: entry/media:group/media:thumbnail
  • Photo timestamp: entry/gphoto:timestamp
  • Photo location: entry/georss:where

This is the new URL:

https://picasaweb.google.com/data/feed/api/user/118196887774002693676/albumid/6052628080819524545?alt=json&fields=entry/media:group/media:content,entry/media:group/media:description,entry/media:group/media:thumbnail,entry/gphoto:timestamp,entry/georss:where

While researching, I also learnt that I could use the imgmax attribute to specify the size of the photos referenced in the photo URL. Neat!

So why should I use this (relatively) old API?
Compared to other popular social media sites, Google doesn't strip the meta information off your photos. Instead, it makes extensive use of the built-in support for image metadata. Hopefully Google will continue to do this, although social media sites have their reasons for not doing so.

This means that Google doesn't lock you in. I can change the location of my photos using my GPS tracks, and the change is reflected wherever I embed them. I can edit the image captions in Picasa and they are stored within the image file, allowing me to write a caption once and use it everywhere.

So what is my album workflow for my travel map? Before starting my journey, I create a new Google+ album. The feed from this album is attached to my map by simply passing on the album id. While on the journey, I use the Google Photos app to add photos to the album, and they automagically show up on the map as well. Back from the trip, I can add and edit photos from my digital camera in Picasa and sync them to the web album.

Photos from Google+ shown on my travel map. 

PS! This blog post is not sponsored by Google :-) 

thematic mapping blog: Showing geotagged photos on a Leaflet map

Using Instagram for my real-time travel map had too many limitations, so I decided to use Google+ photos or Picasa Web Albums instead. I've created a new plugin, Leaflet.Photo, that allows you to add geotagged photos to your map, from any source.

The plugin plays well with the great Leaflet.markercluster plugin, showing your photos in clusters. To make the plugin more versatile, it doesn't deal with AJAX loading or image presentation except the thumbnails on the map. Use your AJAX loader of choice, and simply pass on an array of photo objects to the plugin. 

The photo objects can include the properties you like, but the following are required:
  • lat: latitude of photo
  • lng: longitude of photo
  • thumbnail: url to thumbnail photo
I've kept the squared thumbnails of Instagram, as I think they look nicer than variable-size thumbnails. Since the photos can have any dimensions, I'm using a CSS technique to crop and center the thumbnails.
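As a minimal sketch (the coordinates and URLs are made up), the array of photo objects passed to the plugin could look like this:

var photos = [
    // lat, lng and thumbnail are the required properties; caption is an
    // optional extra you can use in your own popup or lightbox.
    { lat: 68.363, lng: 14.973, thumbnail: 'http://example.com/thumbs/1.jpg', caption: 'Trollfjord hut' },
    { lat: 68.418, lng: 15.072, thumbnail: 'http://example.com/thumbs/2.jpg', caption: 'Sea eagle' }
];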

I've created three examples using Picasa Web Albums Data API, CartoDB (synced with Instagram) and Norvegiana API. With CartoDB you can easily get the required photo properties by manipulating the SQL query. Other APIs will require some data parsing before you pass on the photo objects to the plugin. All examples show the photos in a popup when you click/tap on them, but do whatever you like! On my travel map (click on "Bilder"), I'm using my own lightbox showing all photos in a cluster.

Google+ photos

See example | Code

Photos and videos from Google+. See the small animated GIF video thumbnails. 


Instagram / CartoDB

See example | Code

Instagram photos synced with CartoDB. 


Norvegiana 

See example | Code

Historic photos of Oslo from Norvegiana API. 

Enjoy! :-)

Boundless: OpenLayers 3.0 Released!

 
OpenLayers 3 has been a long time in the making but we’re proud to announce that the project is ready for release! This release includes a ton of new features such as an easier to use API as well as improved vector rendering and image manipulation.

While we’ve confidently supported OpenLayers 3 in production since the OpenGeo Suite 4.0 release and have long offered professional services for developers looking to build web applications, we hope this announcement marks a significant step forward in adoption of the library.
Openlayers.org

Check out some of our previous blog posts to learn more about using OpenLayers 3:

And don’t forget to check out the project website for some great documentation and examples.

OL3 vector rendering


AnyGeo: Make Tweets not War – NATO Canada Schools Mr. Putin on Russia / Ukraine Boundary Map

LOL, I love this one… it seems NATO Canada via Twitter was trying to set Mr. Putin and all of Russia straight by giving them a bit of a geography lesson! Using Twitter as the platform of war (make …

thematic mapping blog: Making a real time travel map

I had to quit my trip from Oslo to Bergen on day three - and I have to wait until August 2015 for a second try. Still, I got time to gain some experience in real-time tracking - and mapping. Based on this experience, I've made a new version of my live travel map: turban.no



This is a private project to learn new skills - where I care more about new standards and less about old browsers. I'm using CSS3 and HTML5 extensively, so the map will probably not show in Internet Explorer < 10, but it should work well on your tablet or smartphone.


My previous map was about 1 MB to load on my mobile, as I really took off mixing Leaflet, Highcharts, Ext JS, jQuery and Fancybox. I'm now left with only Leaflet and D3.js, and only 72 kB of gzipped JavaScript. It was a bit more work to create an elevation chart with D3.js, but it's very flexible once you get the hang of it. I also used D3.js to create a lightbox gallery to show my Instagram photos, as it can easily replace jQuery for selectors and animations.

This is a single-page application running in your browser with a CartoDB backend. The only thing I've changed on my server is the .htaccess file, to point all requests to the same index.html file. Then I'm using the HTML5 History API to create nice-looking URLs for different trips. I've also extended the application to support different users, but I have no plans to create a public web service.
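For reference, that kind of catch-all rewrite only needs a few lines of .htaccess (assuming Apache with mod_rewrite enabled); existing files and folders are served as usual, everything else goes to index.html:

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.html [L]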

The full application code is not available, but the different bits and pieces are and will be. It's still a work in progress. The next steps will be to improve the experience on touch screens, add a 3D display and maybe create a mobile app with PhoneGap.

I want to share some experiences I had when creating this map - and I would very much like your feedback!

When you visit the site, you can select between different trips. I'm creating new trips by simply adding new rows to a CartoDB-table. The track and images for each trip are fetched based on time attributes.


You can also link directly to a trip, like: http://turban.no/bjorn/oslo-gaastjern

You can mouseover or click the track to see place names and altitudes. To improve performance, I'm only drawing the line and not the individual points. To find the nearest point to a mouse/tap position, I'm doing a nearest neighbour search.
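As a naive sketch (the real implementation may differ), finding the track point closest to the mouse position could look like this in JavaScript; a spatial index would speed it up for long tracks:

function nearestTrackPoint(latlng, track) {
    // track is an array of {lat, lng} points; squared degree distance is
    // good enough for picking the closest point to the cursor.
    var best = null, bestDist = Infinity;
    for (var i = 0; i < track.length; i++) {
        var dLat = track[i].lat - latlng.lat;
        var dLng = track[i].lng - latlng.lng;
        var d = dLat * dLat + dLng * dLng;
        if (d < bestDist) {
            bestDist = d;
            best = track[i];
        }
    }
    return best;
}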


Actually, I'm drawing two track lines: the stippled line, and a thicker invisible line to make it easier to catch mouseover and click events, especially on touch devices. Here the invisible line is shown with less transparency:


The image above shows the popup, with terrain type and a weather forecast for this specific location at the time I was there. The track interactions are also linked to the elevation chart:


If you mouseover the track, the same position will show on the elevation chart, and vice versa. Both the track and the chart show the live position with a pulsing marker. I'm also marking the overnight stays, as my SPOT device allows me to send custom messages. The elevation chart reads right to left, as this was the direction of my trip. The direction can be changed for each trip.

If you navigate around in the map, you'll see that the elevation chart is changing to reflect the view:


This is done using Crossfilter to quickly select the points within the map view, although my iPad gets a bit sluggish with instant updates while dragging.

Instagram photos are displayed on the map using the great Leaflet.markercluster plugin:



The photos are shown in a lightbox where you can click/tap through the photos in a cluster (no swipe support yet):



All elements are responsive and should adapt to different screen sizes. I've also made a build process with Grunt to concatenate and compress all the CSS and JavaScript into single files. LESS is used to get rid of all the browser prefixes in the CSS. I also made a custom build of D3.js that only includes the bits I use, reducing the size to one third.

I'll continue the work as new trips come up!

All Points Blog: GIS Health News Weekly: SF Earthquake Impacts Sleep, Registries of Medically Vulnerable Citizens

Fitness Trackers Identify Sleep Disruption During SF Earthquake RE Sieber @re_sieber: VGI & the quantified self: Fitness tracker UP shows spatial distribution of sleep disruptions near the earthquake. The blog post from Jawbone, maker of a fitness device, shows that more people...

All Points Blog: And Now Google Does Drones - Project Wing

Google, too, is experimenting with drones. From the Google X team comes "Project Wing." Several media outlets reported today on Google's experimentation with package delivery using their own drone design. According to the Wall Street Journal, Google said a 5-foot-wide single-wing...

LiDAR News: Digitally Documenting the Lincoln Memorial

DJS donated its time and resources to gather millions of data points in order to capture accurate, reliable measurements of the monument, both interior and exterior.


AnyGeo: ASPRS UAS Symposium Reno Looks At UAV, UAS, Drone Technology

You may have noticed the topic of UAVs and UAS technology making loads of news lately. If UAS technology interests you, then take note of an interesting event from ASPRS coming up this fall in Reno, Nevada – The UAS …

Between the Poles: Small utilities turning to "smart grid as a service"

Globally, smart grid technology has emerged to help utilities deal with challenges such as the need for increased reliability, the need to reduce non-technical losses, distributed renewable generation, and electric vehicles, but for small to medium utilities, limited access to IT resources constrains their ability to implement smart grid solutions. Back in 2010, McKinsey was already seeing AMI vendors starting to look at options for providing AMI services using a "software as a service" model. Now power industry IT vendors and service providers are increasingly offering managed services solutions, referred to as smart grid as a service (SGaaS).

I blogged previously about Burlington Hydro, a small utility in an affluent part of southern Ontario that is integrating into an intelligent network many aspects of what is typically included in smart grid: intelligent network devices, self-healing networks, smart meters, distributed generation, electric vehicles (EV), factory ride-through systems (which enable factories to continue functioning through outages), battery-based electric storage, a bidirectional communications network linking the intelligent devices to the control center, and dramatically increased volumes of real-time data. Burlington Hydro has been working with a local IT consulting company, AGSI, to develop systems to manage their smart grid deployment in a real-time, big data IT environment. But what about small utilities that don't have the in-house skills or the revenue stream to support bringing in an outside IT consulting company?

Back in 2012 at an annual Geospatial Information and Technology (GITA) Pacific Northwest Conference, Terraspatial Technologies gave a presentation on a hosted or SaaS (software as a service) solution for small utilities.

Hosted solution: PlantWorx Cloud

What struck me as so unique about what Terraspatial offers, and what is so valuable to small utilities, is that it is a hosted solution. Basically, all the utility needs to install at its site is a browser; everything else runs in the cloud. The most important benefit of a hosted solution like this is that it has the potential to provide a high level of IT security without the need to increase the level of IT capacity that the utility has to maintain in house.

Terraspatial's hosted solution is called PlantWorx for electric power utilities. The design goals of the solution that Terraspatial developed are very relevant to small utilities. 

  • Hosted, which means that the utility does not need to own or manage servers or software. 
  • Secure because it relies on the security of a major cloud hosting provider such as Rackspace or Amazon that can provide a level of security, including protection from internal tampering, role-based access by users, protection from external  threats, the latest encryption, redundancy and back-ups, ISO certified data centers, and mirrored servers for persistent backup, in other words a much higher level of security than the average utility network is capable of.
  • Accessible from the office and the field
  • Integrated solution that supports staking through to accounting and reporting with interfaces to CAD, GIS, customer information systems, accounting and billing systems, materials management, and other systems

Now according to Navigant Research the growth in cloud-based services has increased the awareness of SGaaS. Offerings are available for a host of smart grid applications in several categories, including home energy management (HEM), advanced metering infrastructure (AMI), distribution and substation automation (DA and SA) communications, asset management and condition monitoring (AMCM), demand response (DR), and software solutions and analytics. 

The complexity of smart grid deployments, systems integration, spatial analytics, real-time big data and cyber security and limited internal IT capacity are some of the drivers behind a growing market for SGaaS. Navigant Research forecasts that the global SGaaS market will grow from just under $1.7 billion in 2014 to more than $11.1 billion in 2023.

thematic mapping blog: Live tracking in Lofoten and Vesterålen

Last weekend, I had a great trip to scenic Lofoten and Vesterålen in Northern Norway. I brought my tracking gear to test my new real time travel map. How did it go?

Our first trip was to Trollfjord, a 2 km long fjord with a narrow entrance and steep-sided mountains. It's a famous tourist spot in the Lofoten archipelago, but not many leave the boat at the fjord's end to hike up to the Trollfjord hut.

The small Trollfjord hut.

Trollfjord goes in an east-west direction, and I expected to be in the "satellite shadow" being far north and having steep mountains blocking the sky towards the south. My good old Garmin GPSmap 60CSx did well in the rugged landscape, while my satellite SPOT messenger had some difficulties finding and sending positions. 

Live track from my SPOT messenger (interactive map).

GPS track from my Garmin GPS.

The great thing about using CartoDB to sync my SPOT data is that you can easily edit your positions after the trip.

Tip! The default basemaps in CartoDB are not very detailed for Norwegian mountains, but you can easily add a basemap (Topo2) from the Norwegian Mapping Authority ("Kartverket") with this URL: 

http://opencache.statkart.no/gatekeeper/gk/gk.open_gmaps?layers=topo2&zoom={z}&x={x}&y={y}

Changing the basemap of CartoDB.  


It's then easy to edit or delete the wrong positions:


Be aware! I struggled to edit my CartoDB tables from my smartphone, as it was not possible to edit the content of table cells. Hopefully, the CartoDB team will make their editor more mobile-friendly in the future.


Another issue was getting the correct time and position for Instagram photos on the map. Trollfjord is an area with poor mobile coverage. When I took photos with the Instagram app, it struggled to place the photos on the map. It worked better if I took the photos with the built-in camera app of my phone (with geotagging activated) and then posted them with the Instagram app.


If I didn't have mobile coverage, I would just retry posting the photo when back in civilisation. The time associated with the image is then when it was sent, not when it was taken. I'm going to check if I can extract the shooting time from the Exif headers of the image.

Our second trip went through Møysalen national park, one of very few national parks in Norway that goes all the way down to sea level. 

Møysalen national park

Here we went in a south-north direction, and my SPOT messenger did better as there were fewer mountains blocking the satellites.

Map and elevation profile of a 2-day hike through Møysalen national park (interactive map).  

The web service from the Norwegian Mapping Authority ("Kartverket") seemed to have some technical trouble this weekend, so the altitude values and place names were not updated instantly. When the web service failed, my script stopped and the weather report from yr.no was not fetched either. I'm going to improve the error handling before my next trip.

I also took a lot of photos with my compact camera while trekking, and I would like to show these on the map as well. My camera doesn't have a GPS receiver, but I should be able to geotag my photos by using my GPS track. That will be the topic of my next blog post. In the meantime, here are some of the photos:

Hurtigruta in Trollfjord

Trollfjord by night

Cloudberries

Trollfjordtindan.

Seagull

Seagulls

Sea eagle in Raftsundet.

Durmålstindan

Tverrelvtindan

Cold and fresh bath at Snytindhytta.

All Points Blog: GIS Education News Weekly: USC Certificates, CyberGIS, Stanford Climate Change Story Map

New USC Certificates The Spatial Sciences Institute housed at USC Dornsife has added two new certificate programs. The Graduate Certificate in Geospatial Leadership and the Graduate Certificate in Geospatial Intelligence offer GIST professionals additional expertise in their...

Geoinformatics Tutorial: Fixing a Map Projection Issue for the NEST SAR Toolbox

In the NEST SAR Toolbox you have a choice of coordinate systems:

For some of these, however, NEST reports the error "Axis Too Complex".

I compared a map projection that works with one where the "Axis too complex" error is reported. Map projections that work contain these lines:
 
AXIS["Easting", EAST],
AXIS["Northing", NORTH]

The non-working ones contain these lines, which apparently are wrong or unprocessable by NEST:
  AXIS["Easting", "North along 90 deg East"], 
AXIS["Northing", "North along 0 deg"],



If you use the graph builder or the command line (gpt), where the parameters are defined in an XML file, you can manually replace these lines in the projection definition in the graph file (XML file), and then it works! An example of such an XML file used by NEST is here.

Another example; this one works:

 PROJCS["WGS 84 / Australian Antarctic Lambert", 
  GEOGCS["WGS 84", 
    DATUM["World Geodetic System 1984", 
      SPHEROID["WGS 84", 6378137.0, 298.257223563, AUTHORITY["EPSG","7030"]], 
      AUTHORITY["EPSG","6326"]], 
    PRIMEM["Greenwich", 0.0, AUTHORITY["EPSG","8901"]], 
    UNIT["degree", 0.017453292519943295], 
    AXIS["Geodetic longitude", EAST], 
    AXIS["Geodetic latitude", NORTH], 
    AUTHORITY["EPSG","4326"]], 
  PROJECTION["Lambert_Conformal_Conic_2SP", AUTHORITY["EPSG","9802"]], 
  PARAMETER["central_meridian", 70.0], 
  PARAMETER["latitude_of_origin", -50.0], 
  PARAMETER["standard_parallel_1", -68.5], 
  PARAMETER["false_easting", 6000000.0], 
  PARAMETER["false_northing", 6000000.0], 
  PARAMETER["scale_factor", 1.0], 
  PARAMETER["standard_parallel_2", -74.5], 
  UNIT["m", 1.0], 
  AXIS["Easting", EAST], 
  AXIS["Northing", NORTH], 
  AUTHORITY["EPSG","3033"]]


EPSG:3031 does not work: 
  PROJCS["WGS 84 / Antarctic Polar Stereographic", 
  GEOGCS["WGS 84", 
    DATUM["World Geodetic System 1984", 
      SPHEROID["WGS 84", 6378137.0, 298.257223563, AUTHORITY["EPSG","7030"]], 
      AUTHORITY["EPSG","6326"]], 
    PRIMEM["Greenwich", 0.0, AUTHORITY["EPSG","8901"]], 
    UNIT["degree", 0.017453292519943295], 
    AXIS["Geodetic longitude", EAST], 
    AXIS["Geodetic latitude", NORTH], 
    AUTHORITY["EPSG","4326"]], 
  PROJECTION["Polar Stereographic (variant B)", AUTHORITY["EPSG","9829"]], 
  PARAMETER["central_meridian", 0.0], 
  PARAMETER["Standard_Parallel_1", -71.0], 
  PARAMETER["false_easting", 0.0], 
  PARAMETER["false_northing", 0.0], 
  UNIT["m", 1.0], 
  AXIS["Easting", "North along 90 deg East"], 
  AXIS["Northing", "North along 0 deg"], 
  AUTHORITY["EPSG","3031"]]

Geoinformatics Tutorial: Terrain Correction of SAR Images -- Part 1

A characteristic of side-looking SAR images is the so-called foreshortening and layover: a reflected signal from a mountaintop reaches the sensor earlier than, or at the same time as, the signal from the foot of the mountain. This results in the typical look of mountains that seem to have "fallen over" towards the sensor:


In the original image to the left, a pixel is basically displaced depending on its elevation above sea level, so it is important to remove this layover, as seen in the image above to the right. The freely available NEST SAR Toolbox is in many ways a great tool for satellite image processing and makes it very easy to terrain-correct SAR images in a fully automatic process.

The algorithm takes the DEM and, using the orbit parameters of the satellite, creates a simulated SAR image from this DEM. The simulated and the real SAR image, which will look very similar, are coregistered. Through this simulation, the displacement for each location in the original landscape (the DEM) is known, so if the simulated SAR image is transformed back into the original DEM -- and the coregistered SAR image along with it -- the pixels of the SAR image receive their real, geographical location. (It's actually quite simple in principle, but I'm not sure this description is clear...)

Below is the original ESA SAR image as loaded into NEST displaying the typical layover: 


Before the terrain correction, I apply the newest orbit file (Utilities>Apply Orbit), calibrate the values (SAR Tools>Radiometric Correction>Calibrate; but not in dB, since the terrain correction needs linear values!) and then run a speckle filter, median 3x3 (SAR Tools>Speckle Filtering>Single Image).

Now to the actual terrain correction. Choose Geometry>Terrain Correction>SAR Simulation Terrain Correction. In the first tab 1-Read you choose the product to be corrected: 


The second tab defines the output. Unfortunately, the default output filename in this case is only "SarSimTC.dim". I follow the NEST naming convention where all applied methods are contained in the filename, such as "ORIGINALNAME_AppOrb_Calib_Spk_SarSimTC.dim", but this has to be typed manually:


In the "3-SAR simulation" tab, one can choose various DEM such as GETASSE and SRTM, but in my case I choose an "External DEM" and specify the file path. I set the "no data value" to 9999.0, otherwise all ocean surface will be NAN. There is something unusual in this tab -- if you do not highlight any of the source bands, the first one only will be processed, in this case "Amplitude_VV". If other bands also should be processed, both source bands (in this case Amplitude_VV and Amplitude_VH) must be highlighted by choosing and clicking them!


The "4-GCP Selection" tab I leave the given values: 


Finally, the "5-SARSIM-Terrain-Correction" tab. For my purpose, I choose 20m resolution for the output image, the map projection of Svalbard/Spitsbergen WGS1984 / UTM 33N and prefer nearest neighbour interpolation:

Now I can choose "process" and this particular run takes 6.5 minutes on a Windows 7 64-bit computer for one source band. For two source bands it is much, much longer (80 minutes in this run!) and may not work if the computer has too little RAM, so the best and fast way is to process individual source bands separately. (The Range Doppler algorithm for removing layover which I will discuss in a later  is faster but does not work for some scenes at high latitudes(?)).

 I choose "Utilities>Dataset Conversion>Linear to dB" to get desibel values and get this final result. The data fits perfectly to cartographic shapefiles of the coastlines and to the other geolocations.


Compared again to the original SAR image, the difference is easily visible!


A problem in the current version: if you -- after having processed a particular scene -- choose a different scene with differently named source bands as input under "1-Read", the source band list under "3-SAR Simulation" does not update; you have to close the whole window and start all over. Part 2 and the following posts describe how to process a large number of scenes.

The next postings will discuss how to run all this from command line and do batch processing.

Geoinformatics Tutorial: Terrain Correction of SAR Images -- Part 2

Rather than clicking through each applied method in the menu, a production chain can be implemented with the "Graph Builder". Choose "Graphs>Graph Builder"; by right-clicking in the graph space you can add methods and connect them with arrows in the order they should run.

You can save the whole graph as an XML file for later use; this is also needed for batch command line processing. The individual tabs in the Graph Builder are just the same as described in Part 1, only be aware that for the SARSIM Terrain Correction three individual parts have to be chosen (numbers 6 through 8 below). The "SARSIM-Terrain-Correction" node you can choose is only one part of the similarly named SARSIM Terrain Correction in Part 1!


Geoinformatics Tutorial: Terrain Correction of SAR Images -- Part 3

The most convenient way to process large quantities of SAR data is using the methods through the command line. With the "gpt" command, as described in the NEST help pages, you can process single scenes from the command line, but here is a way to process large quantities of scenes. With the DOS "for" command you recursively search through your directories for scenes to be processed and hand each of these scenes to the gpt command.

Here is how to do it:

First you create the processing chain with the graph builder as described in part 2 and save it as an XML file. Especially in the beginning, you may want to keep it to simpler processing chains not containing all tasks at once. In our case, let's take only the SARSIM Terrain Correction:


You set the values in Graph Builder and save it, let's say as "SARSIM_TC.xml". You should still check and edit the XML file for the parameters you need (map projection, resolution, etc.), and you will have to modify the saved XML file in two places for batch command line use, as follows:

Make sure that the filenames in the XML file have $file and $target as placeholders as in the following examples:


 <node id="1-Read">
    <operator>Read</operator>
    <sources/>
    <parameters class="com.bc.ceres.binding.dom.Xpp3DomElement">
      <file>$file</file>
    </parameters>
  </node>
and the Write part (sourceProduct refid may vary in your case):


<node id="2-Write">
    <operator>Write</operator>
    <sources>
      <sourceProduct refid="6-SARSim-Terrain-Correction"/>
    </sources>
    <parameters class="com.bc.ceres.binding.dom.Xpp3DomElement">
           <formatName>BEAM-DIMAP</formatName>
      <file>$target</file>
    </parameters>
</node>
Now you create a SARSIM_TC.bat file containing


for /r C:\Users\max\location_of_files\ %%X in (*.dim) do (gpt C:\Users\max\location_of_XMLfile\SARSIM_TC.xml -Pfile="%%X" -Tfile="C:\Users\max\location_of_files\%%~nX_SarSimTC.dim")

What happens here?


  1. The for-command goes through the directory containing your files to find files named "*.dim" and passes the file name to "%%X".
  2. For each of these input files (-Pfile="%%X"), the NEST command "gpt" applies the Graph Builder production chain saved in "SARSIM_TC.xml".
  3. The output is saved to the path given by the -Tfile parameter, here written as "%%~nX_SarSimTC.dim": the original filename with "_SarSimTC" inserted before the file extension, to indicate that the scene has been processed with SarSim. You may choose different naming, but I find this convenient.
You navigate the DOS window (type "cmd" at Windows Start> "search programs and files" to open it) to the directory containing SARSIM_TC.bat, then type "SARSIM_TC.bat" and all scenes in the specified folder will be processed.

The results will be the same as shown in Part 1



Geoinformatics Tutorial: Terrain Correction of SAR Images -- Part 4

A short comment only on the "Range Doppler Terrain Correction". As described in Part 1, the SARSIM algorithm takes the DEM and, using the orbit parameters of the satellite, creates a simulated SAR image from this DEM. The simulated and the real SAR image, which will look very similar, are coregistered. Through this simulation, the displacement for each location in the original landscape (the DEM) is known, so if the simulated SAR image is transformed back into the original DEM -- and the coregistered SAR image along with it -- the pixels of the SAR image receive their real, geographical location.

The Range Doppler algorithm does not simulate a SAR image and coregister it with the original SAR image, but calculates the displacement based on orbit parameters and a DEM. The Range Doppler algorithm is much faster in processing scenes. When comparing scenes processed with both the SARSIM and Range Doppler methods, I find no difference in the final product. However, the Range Doppler method does not work for quite a few of my scenes. If I understood ESA correctly, this is due to insufficiently accurate data in the SAR metadata, so the displacement calculations are incorrect. This appears to be specific to data from the Arctic regions.

I therefore haven't used this one much, but in other areas of the world the Range Doppler Terrain Correction may be worth using.

Choose Geometry>Terrain Correction>Range Doppler Terrain Correction  (in Graph Builder choose "Terrain Correction"). The settings are as follows


LiDAR News: Mobile Mapping Contest Closes August 31

If you'd like a chance to win $10K plus use of the new Pegasus:Two on a mobile mapping project …


Andrew Zolnai Blog: The happenstance art of maps

I showed recently how CLIWOC weather data from ships' captains' logs dating from 1662 to 1885 totalled almost 1/2M points. It started with 1/4M ship tracks, and by combining look-up tables from four maritime agencies they yield numeric wind force and direction...
But wait! Let's leave trad posters in favour of rad palettes, shall we?
Although not evenly scattered, they create a rather arresting visual effect. I coded the Beaufort (wind strength) readings by colour as well as size - ROYGBIV and large => small from low to high Beaufort - and posting the smaller, weaker wind values over the larger, stronger ones not only reduced symbol overlap and hiding, but also created a pseudo-3D effect. Orienting them to wind direction helped avoid over-posting as well.

Here it is with a simple continent mask backdrop:


Here it is against Rumsey's 1812 world map to match the vintage of the data:


Geolicious liked my white-on-black static map on AWS, so I tried inverting too:


So who said maps cannot be art? And does this not remind you of starling flight clouds?

Image: www.rspb.org.uk


GIS Cloud: Why Mobile Data Collection Portal?

For everyone who wants to take their field data collection to the next level, here are some interesting facts about Mobile Data Collection Portal.

Creating forms, accessing the form/project in MDC app and making edits within the app from anywhere, anytime on any device

Key values:
  • Free sign in/use your existing username
  • Create and edit as many custom forms and projects as you want
  • Add as many fields as you want like select lists, radio buttons, hidden fields, barcode, latitude, longitude
  • Access the projects and forms with your Mobile Data Collection app
  • Full and real time data display
  • Share your projects with other users (manage your permissions)
  • Create and save reports (with all the data and media collected) to your desktop (hit ctrl+s)

For more ideas on how to utilize MDC and MDC Portal, check out our Field Data Collection and Inspection Solution.

Use cases:

All Points Blog: GIS Government Weekly News: GGIM Standards Report, Sarasota Polygons, First Nations Map Their Land

GGIM Standards Report National Mapping Authority Perspective: International Geospatial Standards by Lead Authors Gerardo Esparza, INEGI, Mexico, Steven Ramage, Ordnance Survey International, Great Britain is available as a 26 page PDF download.   Sarasota Capital Projects Map...

LiDAR News: Laser Scanner Provides Quality Control for 3D Printing

Suppose your 3D printer malfunctions in the middle of a project.


geoMusings: Maryland Council on Open Data

Back in May, I had the honor of being appointed to the newly established Maryland Council on Open Data. The Council had its inaugural meeting in Baltimore yesterday and was well attended, including by Governor Martin O’Malley. I’ll discuss his remarks to the group later.

As the first meeting of a new group, it went off largely as I expected. The agenda consisted primarily of an overview of the establishing legislation, a review of ethics requirements, demos of the existing open data portals, discussion of the history of open data in Maryland, and remarks from the Governor.

I won’t go into details about the make-up of the Council, but they can be found here http://msa.maryland.gov/msa/mdmanual/25ind/html/53opendata.html. Nor will I do a deep dive into the legislation, but it can be found here: http://mgaleg.maryland.gov/2014RS/chapters_noln/Ch_69_sb0644T.pdf (PDF). I will instead focus on my take-aways from the meeting itself.

First, the establishing legislation makes data in Maryland open by default, unless it falls under certain criteria (personally identifiable information, law enforcement sensitive, etc.). Second, it is in fact legislation. Previous open data initiatives (the MSGIC Executive Committee and the Maryland Open Data Working Group) were established by executive order. As a result, they were vulnerable to reversal by subsequent administrations and they had no real effect on other branches of government. Because of the new legislation, the open data effort has greater durability and active, enthusiastic participation from the state legislature.

So the new Council unifies the previous efforts and has top cover from the legislation. That can only be a good thing. The state currently has two open data portals: an Esri Open Data portal for geospatial data and a Socrata portal for everything else. In practice, the lines between them may not be so distinct, but that’s the stated role of each.

It is clear that open data is important to the Governor. Since his days as the Mayor of Baltimore, he has been known as a data-driven executive. The “CityStat” concept in Baltimore evolved into “StateStat” when he moved to Annapolis. It is widely known that data and metrics back everything his administration does. Open data is the other side of the coin. It is the mechanism by which the supporting data and metrics of the state government are made public.

In his remarks, the Governor highlighted Maryland’s top ranking (shared by six states) in the Center for Data Innovation report of August 18, 2014 but then quickly addressed the remaining weaknesses identified. Specifically, he discussed:

  • Reporting on spending data
  • The need for a more complete picture of all data sets
  • The need for better minimum metadata standards
  • The need for a uniform standard for giving citizens access to public information

These are all fairly easy to address on the surface. In the open discussion that followed, the theme that I took away was “culture.” Maryland has the policy framework and a good start on a technical framework to more fully open its data. The hard work, as it always is, is transforming the culture. As the requirements for opening data begin to trickle down into the daily lives of the people handling information, I think a lot of workflows will change. I also suspect some of the standards chosen for achieving compliance with open data requirements will facilitate significant changes to technical architectures in individual departments. In short, the proverbial onion is just starting to get peeled.

It’s a lot of work and it won’t happen quickly. I am excited for the opportunity to participate. I have worked in the private sector for my entire career (although in professional services to the government), so this is my first foray into anything resembling being on the public sector side of things. I expect I’ll have a lot to learn.

My interest in open data has been somewhat spurred by the infrastructure data contraction that occurred in the mid-2000s in reaction to domestic security concerns. I felt it was the wrong direction to go then and I don’t think making data harder to acquire really helped anyone. I’m looking forward to seeing what we can do on the Council to achieve the benefits other states, like Arkansas, have achieved through opening data rather than closing it off.

AnyGeo: Open Government Tour 2014 #OGT14 Hits Victoria BC to Stimulate OpenData Dialog #opengov

A topic that is hugely hot these days, particularly with the GeoTech crowd, is Open Data. I spent an evening in the meeting chambers at our local City Hall discussing OpenData with about 2 dozen other geeks with nothing better …

GIS Lounge: Esri Wants Your Input on the Esri User Conference Agenda

Esri has posted a survey soliciting input on the annual Esri User Conference agenda.  The survey is short and can be taken in five minutes.  It asks a series of questions about how the agenda is presented (mobile, online, paper, etc.) and accessed. The answers are confidential and the only [...]


LiDAR News: LiDAR eNewsletter Informs

With many here in the U.S. on holiday, this can be a great time to catch up on your reading - and please pass us on to a colleague. Enjoy.


The Map Guy(de): I have a dream

I have a dream

Where MapGuide and FDO source code are hosted and/or mirrored on GitHub.

Where by the virtue of being hosted on GitHub, these repositories are set up to take advantage of every free service available to improve our code quality and developer workflow:
  • TravisCI for Continuous Integration of Linux builds
  • AppVeyor for Continuous Integration of Windows builds
  • CoverityScan for static code analysis
  • Coveralls for code coverage analysis
  • What other awesome services can we hook on here? Please enlighten me. I'd really want to know.
Where a single commit (from svn or git) can start an avalanche of cloud-based services that will immediately tell me in several hours time (because C++ code builds so fast doesn't it?):
  1. If the build is OK (thanks to Travis and AppVeyor)
  2. Where we should look to improve our test coverage (thanks to coveralls)
  3. Areas in our codebase where we should look to change/tweak/refactor (thanks to coverity scan)
  4. Other useful reports and side-effects.
Now the difference between dream and reality is that there are clear obstacles preventing our dream from being realised. Here are some that I've identified.

1. Git presents a radically different workflow than Subversion

Yes, we're still using subversion for MapGuide and FDO (har! har! Welcome to two-thousand-and-late!). Moving to GitHub means not only a change of tools, but a change of developer workflows and mindset.

So in this respect, rather than a full migration, an automated process of mirroring svn trunk/branches (and any commits made to them) to GitHub would be a more practical solution. Any pointers on how to make this an idiot-proof process?
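One low-tech possibility (the repository URLs below are placeholders) would be git-svn plus a scheduled job:

git svn clone https://svn.example.org/mapguide/trunk mapguide-mirror
cd mapguide-mirror
git remote add github git@github.com:example/mapguide-mirror.git
git push github master

# then, from a cron job or scheduled task, pull new svn revisions and push them on:
git svn rebase
git push github master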

2. Coverage/support is not universal

MapGuide/FDO are multi-platform codebases. Although TravisCI can cover the Ubuntu side and AppVeyor can cover the Windows side, it does leave out CentOS. I've known enough from experience (or plain ignorance) that CentOS and Ubuntu builds need their own separate build/test/validate cycles.

And actually, Travis VMs being 64-bit Ubuntu Linux doesn't help us either. Our ability to leverage Travis would hinge on whether we can either get 64-bit builds working on Linux or are able to cross-compile and run 32-bit MapGuide on 64-bit Ubuntu, something that has not been tried before.

Also most service hooks (like coveralls and CoverityScan) target Travis and not AppVeyor, meaning whatever reports we get back about code quality and test coverage may have a Linux-biased point of view attached to them.

3. The MapGuide and FDO repositories are HUGE!

The repositories of MapGuide and FDO not only contain the source code of MapGuide and FDO respectively, but the source code of every external thirdparty library and component that MapGuide/FDO depends on, and there's a lot of third-party libraries we depend on.

If we transfer/mirror the current svn repositories to GitHub as-is, I'm guessing we'd probably be getting some nice friendly emails from GitHub about why our repos are so big in no time.

Also, would Travis and AppVeyor let us get away with such giant clones/checkouts happening every time a build is triggered in response to a commit? I don't think so. Then again, I do live in a country where bandwidth doesn't grow on trees and our current government has destroyed our dreams of faster internet. What do I know?



So what do you think? Is this dream something worth pursuing?

sharpgis: Creating a simple push service for home automation alerts

In my intro post for my home automation project, I described a part of the app that takes temperature, humidity and power consumption measurements and pushes them to my phone. In this blog post, we'll build a simple console web service that allows a phone to register itself with the service, and the service can push a message to any registered device using the Windows Push Notification Service (WNS). Even if you're not building a home automation server, but just need to figure out how to push a message to your phone or tablet, this blog post is still for you (but you can ignore some of the console web service bits).

To send a push message via WNS to an app, you need the "phone number of the app". This is a combination of the app and the device ID. If you know this, and you're the owner of the app, you are able to push messages to the app on the device. It's only the app on the device that knows this phone number. If the app wants someone to push a message to the device, it will need to share that number with them. But for this to work, you will first have to register the app on the Microsoft developer portal and associate your project with the app in order to be able to create a valid "phone number".

Here’s the website after registering my app “Push_Test_App”. You simply create a new app, and only need to complete step 1 to start using WNS.


Next you will need to associate your app with the app you created in the store from the following menu item:


Simply follow the step-by-step guide. Note that the option is only available for the active startup-up project. Repeat this for both the store and phone app if you’re creating a universal project (make sure to change the startup project to get the menu item to relate to the correct project).

This is all we need to do to get the app's "phone number": the "channel URI". We can get that using the following lines of code:

var channel = await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();
var uri = channel.Uri;

Notice that the channel.Uri property is just a URL. This is the URL to the WNS service where we can post stuff to. The WNS service will then forward the message you send to your app on the specific device.

The next step is to create the push service. We'll create two operations: a web service endpoint that an app can send its channel URI to, and later an operation to push messages to all registered clients.

We’ll first build the simple console app web server. Some of this is in part based on http://codehosting.net/blog/BlogEngine/post/Simple-C-Web-Server.aspx, where you can get more details.

The main part to notice is that we’ll start a server on port 8080, and we’ll wait for a call to /registerPush with a POST body containing a json object with channel uri and device id:

using System;
using System.Net;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Threading;

namespace PushService
{
    internal class HttpWebService
    {
        private HttpListener m_server;
        
        public void Start()
        {
            ThreadPool.QueueUserWorkItem((o) => RunServer());
        }

        private void RunServer()
        {
            Int32 port = 8080;
            m_server = new HttpListener();
            m_server.Prefixes.Add(string.Format("http://*:{0}/", port));
            m_server.Start();
            while (m_server.IsListening)
            {
                HttpListenerContext ctx = m_server.GetContext();
                ThreadPool.QueueUserWorkItem((object c) => ProcessRequest((HttpListenerContext)c), ctx);
            }
        }

        public void Stop()
        {
            if (m_server != null)
            {
                m_server.Stop();
                m_server.Close();
            }
        }

        private void ProcessRequest(HttpListenerContext context)
        {
            switch(context.Request.Url.AbsolutePath)
            {
                case "/registerPush":
                    HandlePushRegistration(context);
                    break;
                default:
                    context.Response.StatusCode = 404; //NOT FOUND
                    break;
            }
            context.Response.OutputStream.Close();
        }

        private void HandlePushRegistration(HttpListenerContext context)
        {
            if (context.Request.HttpMethod == "POST")
            {
                if (context.Request.HasEntityBody)
                {
                    System.IO.Stream body = context.Request.InputStream;
                    System.Text.Encoding encoding = context.Request.ContentEncoding;
                    System.IO.StreamReader reader = new System.IO.StreamReader(body, encoding);
                    DataContractJsonSerializer s = new DataContractJsonSerializer(typeof(RegistrationPacket));
                    var packet = s.ReadObject(reader.BaseStream) as RegistrationPacket;
                    if (packet != null && packet.deviceId != null && !string.IsNullOrWhiteSpace(packet.channelUri))
                    {
                        if (ClientRegistered != null)
                            ClientRegistered(this, packet);
                        context.Response.StatusCode = 200; //OK
                        return;
                    }
                }
            }
            context.Response.StatusCode = 500; //Server Error
        }

        /// <summary>
        /// Fired when a device registers itself
        /// </summary>
        public event EventHandler<RegistrationPacket> ClientRegistered;


        [DataContract]
        public class RegistrationPacket
        {
            [DataMember]
            public string channelUri { get; set; }
            [DataMember]
            public string deviceId { get; set; }
        }
    }
}

Next let’s start this service in the console main app. We’ll listen for clients registering and store them in a dictionary.

 

private static Dictionary<string, Uri> registeredClients = new Dictionary<string, Uri>(); //List of registered devices

static void Main(string[] args)
{
    //Start http service
    var svc = new HttpWebService();
    svc.Start();
    svc.ClientRegistered += svc_ClientRegistered;

    Console.WriteLine("Service started. Press CTRL-Q to quit.");
    while (true)
    {
        var key = Console.ReadKey();
        if (key.Key == ConsoleKey.Q && key.Modifiers == ConsoleModifiers.Control)
            break;
    }
    //shut down
    svc.Stop();
}
        
private static void svc_ClientRegistered(object sender, HttpWebService.RegistrationPacket e)
{
    if(registeredClients.ContainsKey(e.deviceId))
        Console.WriteLine("Client updated: " + e.deviceId);
    else
        Console.WriteLine("Client registered: " + e.deviceId);
    
    registeredClients[e.deviceId] = new Uri(e.channelUri); //store list of registered devices
}

Note: You will need to launch this console app as admin to be able to open the http port.

Next, let’s add some code to our phone/store app that calls this endpoint and sends its channel URI and device id (we won’t really need the device id, but it’s a nice way to identify the clients we push to and avoid duplicate registrations). We’ll also add a bit of code to handle receiving a push notification if the server sends a message back (we’ll get to that part later):

private async void ButtonRegister_Click(object sender, RoutedEventArgs e)
{
    var channel = await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();
    var uri = channel.Uri;
    channel.PushNotificationReceived += channel_PushNotificationReceived;
    RegisterWithServer(uri);
}

private async void RegisterWithServer(string uri)
{
    string IP = "192.168.1.17"; //IP address of the server. Replace with the IP or host name your service is running on
    HttpClient client = new HttpClient();

    DataContractJsonSerializer s = new DataContractJsonSerializer(typeof(RegistrationPacket));
    RegistrationPacket packet = new RegistrationPacket()
    {
        channelUri = uri,
        deviceId = GetHardwareId()
    };
    System.IO.MemoryStream ms = new System.IO.MemoryStream();
    s.WriteObject(ms, packet);
    ms.Seek(0, System.IO.SeekOrigin.Begin);
    try
    {
        //Send push channel to server
        var result = await client.PostAsync(new Uri("http://" + IP + ":8080/registerPush"), new StreamContent(ms));
        Status.Text = "Push registration successfull";
    }
    catch(System.Exception ex) {
        Status.Text = "Push registration failed: " + ex.Message;
    }
}

//returns a unique hardware id
private string GetHardwareId()
{
    var token = Windows.System.Profile.HardwareIdentification.GetPackageSpecificToken(null);
    var hardwareId = token.Id;
    var dataReader = Windows.Storage.Streams.DataReader.FromBuffer(hardwareId);

    byte[] bytes = new byte[hardwareId.Length];
    dataReader.ReadBytes(bytes);
    return BitConverter.ToString(bytes);
}

[DataContract]
public class RegistrationPacket
{
    [DataMember]
    public string channelUri { get; set; }
    [DataMember]
    public string deviceId { get; set; }
}

//Called if a push notification is sent while the app is running
private void channel_PushNotificationReceived(PushNotificationChannel sender, PushNotificationReceivedEventArgs args)
{
    var content = args.RawNotification.Content;
    var _ = Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        Status.Text = "Received: " + content; //Output message to a TextBlock
    });
}
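
Channel URIs aren’t permanent (WNS can hand out a new one, and channels eventually expire), so rather than relying only on a button it’s common to re-request the channel on every launch and re-register it with the server. A minimal sketch, assuming the code lives in the page’s code-behind and overrides OnNavigatedTo (that placement is my assumption, not part of the sample; re-registering is harmless because the server keys on device id):

protected override async void OnNavigatedTo(Windows.UI.Xaml.Navigation.NavigationEventArgs e)
{
    base.OnNavigatedTo(e);
    //Request a (possibly new) channel on every launch and push it to the server
    var channel = await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();
    channel.PushNotificationReceived += channel_PushNotificationReceived;
    RegisterWithServer(channel.Uri);
}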

Now if we run this app, we can register the device with your service. When you run both, you should see “client registered” output in the console. If not, check your IP and firewall settings.

Lastly, we need to perform the actual push. The basic idea is simply to POST some content to the channel URL, but the service first needs to authenticate itself with WNS using OAuth. To get the credentials for that, go back to the dev portal, open the app we created earlier and click the “Services” option:

[screenshot]

Next, go to the rather subtle link on the following page (it isn’t very obvious, even though it’s the most important thing on the page):

[screenshot]

The next page has what you need: copy the highlighted Package SID and Client Secret. You will need these to authenticate with the WNS service.

[screenshot]

The following helper class obtains an OAuth access token using the Package SID and client secret from above, and provides a method for pushing a message to a channel URI using that token:

using System;
using System.IO;
using System.Net;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Text;
using System.Web;

namespace PushService
{
    internal static class PushHelper
    {
        public static void Push(Uri uri, OAuthToken accessToken, string message)
        {
            HttpWebRequest request = HttpWebRequest.Create(uri) as HttpWebRequest;
            request.Method = "POST";
            //Change this depending on the type of notification you need to do. Raw is just text
            string notificationType = "wns/raw";
            string contentType = "application/octet-stream";
            request.Headers.Add("X-WNS-Type", notificationType);
            request.ContentType = contentType;
            request.Headers.Add("Authorization", String.Format("Bearer {0}", accessToken.AccessToken));

            byte[] contentInBytes = Encoding.UTF8.GetBytes(message);
            using (Stream requestStream = request.GetRequestStream())
                requestStream.Write(contentInBytes, 0, contentInBytes.Length);
            //WNS returns 200 OK when the notification is accepted; errors (expired channel,
            //invalid token, etc.) surface as a WebException with a non-2xx status code
            using (HttpWebResponse webResponse = (HttpWebResponse)request.GetResponse())
            {
                string code = webResponse.StatusCode.ToString();
            }
        }

        public static OAuthToken GetAccessToken(string secret, string sid)
        {
            var urlEncodedSecret = HttpUtility.UrlEncode(secret);
            var urlEncodedSid = HttpUtility.UrlEncode(sid);

            var body =
              String.Format("grant_type=client_credentials&client_id={0}&client_secret={1}&scope=notify.windows.com", urlEncodedSid, urlEncodedSecret);

            string response;
            using (var client = new System.Net.WebClient())
            {
                client.Headers.Add("Content-Type", "application/x-www-form-urlencoded");
                response = client.UploadString("https://login.live.com/accesstoken.srf", body);
            }
            using (var ms = new MemoryStream(Encoding.Unicode.GetBytes(response)))
            {
                var ser = new DataContractJsonSerializer(typeof(OAuthToken));
                var oAuthToken = (OAuthToken)ser.ReadObject(ms);
                return oAuthToken;
            }
        }
    }

    [DataContract]
    internal class OAuthToken
    {
        [DataMember(Name = "access_token")]
        public string AccessToken { get; set; }
        [DataMember(Name = "token_type")]
        public string TokenType { get; set; }
    }
}
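
Raw notifications like the ones this sample sends are only delivered while the app is running (or to a background task, which this sample doesn’t set up). If you’d rather have the OS pop a toast for you, you can post toast XML with the wns/toast type instead. Below is a minimal sketch of a PushToast method you could add next to Push in the helper class above; PushToast is my own addition, not part of the sample, and the app must also be marked toast capable in its manifest:

public static void PushToast(Uri uri, OAuthToken accessToken, string text)
{
    //A simple single-line toast payload (ToastText01 template)
    string toastXml =
        "<toast><visual><binding template=\"ToastText01\">" +
        "<text id=\"1\">" + System.Security.SecurityElement.Escape(text) + "</text>" +
        "</binding></visual></toast>";

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
    request.Method = "POST";
    request.Headers.Add("X-WNS-Type", "wns/toast"); //toast instead of wns/raw
    request.ContentType = "text/xml";               //toast and tile payloads are XML
    request.Headers.Add("Authorization", String.Format("Bearer {0}", accessToken.AccessToken));

    byte[] contentInBytes = Encoding.UTF8.GetBytes(toastXml);
    using (Stream requestStream = request.GetRequestStream())
        requestStream.Write(contentInBytes, 0, contentInBytes.Length);
    using (HttpWebResponse webResponse = (HttpWebResponse)request.GetResponse())
    {
        //200 OK means WNS accepted the notification
    }
}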

Next, let’s change our console Main method so that pressing ‘S’ pushes a message to all registered clients. In this simple sample we’ll just push the current server time.

static void Main(string[] args)
{
    //Start http service
    var svc = new HttpWebService();
    svc.ClientRegistered += svc_ClientRegistered; //subscribe before starting so no early registration is missed
    svc.Start();

    Console.WriteLine("Service started. Press CTRL-Q to quit.\nPress 'S' to push a message.");
    while (true)
    {
        var key = Console.ReadKey();
        if (key.Key == ConsoleKey.Q && key.Modifiers == ConsoleModifiers.Control)
            break;
        else if(key.Key == ConsoleKey.S)
        {
            Console.WriteLine();
            PushMessageToClients();
        }
    }
    //shut down
    svc.Stop();
}

private static void PushMessageToClients()
{
    if (registeredClients.Count == 0)
        return; //no one to push to

    //Message to send to all clients
    var message = string.Format("{{\"Message\":\"Current server time is: {0}\"}}", DateTime.Now);

    //Generate token for push

    //Set app Package SID and client secret (get these for your app from the developer portal)
    string packageSid = "ms-app://s-X-XX-X-XXXXXXXXX-XXXXXXXXXX-XXXXXXXXX-XXXXXXXXXX-XXXXXXXXXX-XXXXXXXXXX-XXXXXXXXXX";
    string clientSecret = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX";
    //Generate oauth token required to push messages to client
    OAuthToken accessToken = null;
    try
    {
        accessToken = PushHelper.GetAccessToken(clientSecret, packageSid);
    }
    catch (Exception ex)
    {
        Console.WriteLine("ERROR: Failed to get access token for push : {0}", ex.Message);
        return;
    }

    int counter = 0;
    //Push the message to all the clients
    foreach(var client in registeredClients)
    {
        try
        {
            PushHelper.Push(client.Value, accessToken, message);
            counter++;
        }
        catch (Exception ex)
        {
            Console.WriteLine("ERROR: Failed to push to {0}: {1}", client.Key, ex.Message);
        }
    }
    Console.WriteLine("Pushed successfully to {0} client(s)", counter);
}
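
Since the payload is a small JSON document, the client could deserialize it rather than just display the raw string. A minimal sketch of what the PushNotificationReceived handler might do instead, assuming a hypothetical PushMessage class that mirrors the JSON sent above:

[DataContract]
public class PushMessage
{
    [DataMember]
    public string Message { get; set; }
}

private void channel_PushNotificationReceived(PushNotificationChannel sender, PushNotificationReceivedEventArgs args)
{
    var content = args.RawNotification.Content;
    //Deserialize the raw JSON payload sent by the console app
    var s = new DataContractJsonSerializer(typeof(PushMessage));
    PushMessage msg;
    using (var ms = new System.IO.MemoryStream(System.Text.Encoding.UTF8.GetBytes(content)))
        msg = (PushMessage)s.ReadObject(ms);
    var _ = Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        Status.Text = msg.Message; //e.g. "Current server time is: ..."
    });
}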

Now run the console app and the phone/store app. First click the “register” button in your app, then press ‘S’ in the console app to send a message. Almost instantly you should see the server time printed in your app.

[screenshot]

Note: We’re not using a background task, so this will only work while the app is running. In the next blog post we’ll look at how to set that up, as well as how to create a live tile with a graph on it.

You can download all the source code here: Download (remember to update the Package SID/Client Secret and associate the app with your own Store app).

geothoughtDenver Union Station guide

This is slightly off topic, but as a side project I have just put together a small web site which is a guide to all the cool new developments at Denver Union Station. If you live in (or are visiting) the Denver area and haven't checked out Union Station recently, you definitely should! And to make it not totally off topic, there will be an interactive map appearing on the site shortly, I just

Footnotes