3D Printed Geographies – Techniques and Examples

As a follow-up to my post on “Geospatial Data Preparation for 3D Printed Geographies” (19 Sept 2015), I am providing an update on the different approaches that I have explored with my colleague Dr. Claire Oswald for our one-year RECODE grant entitled “A 3D elevation model of Toronto watersheds to promote citizen science in urban hydrology and water resources”. The tools that we have used to turn geospatial data into 3D prints include the program heightmap2stl; direct loading of a greyscale image into the Cura 3D printing software; the QGIS plugin DEMto3D; the script shp2stl.js; and a workflow using Esri’s ArcScene for 3D extrusion, saving in VRML format, and translating this file into STL format with the MeshLab software.

The starting point: GIS and heightmap2stl

As a GIS specialist with limited knowledge of 3D graphics or computer-aided design, I rely heavily on the work of others for all of the techniques used to make geospatial data printable, and my understanding of the final steps of data conversion and 3D print preparation is somewhat limited. With this in mind, the first approach to converting geospatial data, specifically a digital elevation model, used Markus Fussenegger’s Java program heightmap2stl, which can be downloaded from http://www.thingiverse.com/thing:15276/#files and used according to detailed instructions on “Converting DEMs to STL files for 3D printing” by James Dittrich of the University of Oregon. The process from QGIS or ArcGIS project to greyscale map image to printable STL file was outlined in my previous post at http://gis.blog.ryerson.ca/2015/09/19/geospatial-data-preparation-for-3d-printed-geographies/.

Quicker and not dirtier: direct import into Cura

The use of the heightmap2stl program in a Windows environment requires a somewhat cumbersome process on the Windows command line, and the resulting STL files seemed exceedingly large, although I did not investigate this issue systematically. I was therefore very pleased to discover, by accident, that the Cura software, which I use with my Lulzbot Taz 5 printer, can load greyscale images directly.

The following screenshot shows the available parameters after clicking “Load Model” and selecting an image file (e.g. in PNG format, rather than an STL file). The parameters include the height of the model, the height of a base to be created, the model width and depth within the printer’s hardware limits, the direction in which greyscale values are interpreted as height (lighter or darker is higher), and whether to smooth the model surface.

[Screenshot: Cura import settings for the Oak Ridges Moraine (east) DEM]

The most ‘popular’ model created using this workflow is our regional watershed puzzle. The puzzle consists of a baseplate with a few small watersheds that drain directly into Lake Ontario along with a set of ten separately printed watersheds, which cover the jurisdiction of the Toronto and Region Conservation Authority (TRCA).

Controlling geographic scale: QGIS plugin DEMto3D

The first two approaches share a significant limitation for 3D printing of geography: they provide no control over geographic scale. To keep track of scale and vertical exaggeration, one has to calculate these values from the geographic extent, the elevation range, and the model/printer dimensions. This is where the neat QGIS plugin DEMto3D comes into play.

As can be seen in the following screenshot, DEMto3D allows us to determine the print extent from the current QGIS project or layer extents; set the geographic scale in conjunction with the dimensions of the 3D print; specify the vertical exaggeration; and tie the base of the model to a geographic elevation. For example, the current setting of 0m would print elevations above sea level, while a setting of 73m would print elevations of the Toronto region relative to the surface level of Lake Ontario. One shortcoming of DEMto3D is that vertical exaggeration is oddly limited to a factor of 10, which we found not always sufficient to visualize regional topography.
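
For readers who want to double-check the plugin’s numbers – or work without it, as in the first two approaches – the underlying arithmetic is straightforward. The following Python sketch is my illustration only (the extents and heights are made-up values, not taken from one of our prints); it computes the scale denominator and the vertical exaggeration from the geographic extent, the elevation range, and the intended model dimensions:

def print_scale(extent_m, model_mm):
    # Scale denominator: e.g., 30,000 m mapped onto 200 mm gives 1:150,000
    return extent_m * 1000.0 / model_mm

def vertical_exaggeration(scale_denominator, elevation_range_m, model_height_mm):
    # How much taller the model is than a true-to-scale relief would be
    true_height_mm = elevation_range_m * 1000.0 / scale_denominator
    return model_height_mm / true_height_mm

scale = print_scale(extent_m=30000, model_mm=200)            # 150,000
exagg = vertical_exaggeration(scale, elevation_range_m=300,
                              model_height_mm=20)            # 10.0

A 30km extent printed at 20cm width thus corresponds to a scale of 1:150,000, and squeezing a 300m elevation range into a 20mm tall relief implies a 10-fold vertical exaggeration – the kind of value that DEMto3D reports directly.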

[Screenshot: DEMto3D settings – 1:150,000 scale, 10x exaggeration, 0m base height, Humber watershed]

Using DEMto3D, we recently printed our first multi-part geography, a two-piece model of the Oak Ridges Moraine, which stretches over 200km in the east-west direction to the north of the City of Toronto and contains the headwaters of streams running south towards Lake Ontario and north towards Lake Simcoe and Georgian Bay. To increase the vertical exaggeration for this print from 10x to 25x, we simply rescaled the z dimension in the Cura 3D printing software after loading the STL file.

Another Shapefile converter: shp2stl

The DEMto3D plugin strictly requires true DEM data (as far as I have been able to determine), so it would not convert a Shapefile with building heights for the Ryerson University campus and surrounding City of Toronto neighbourhoods, which I wanted to print. Converting a greyscale image of campus building heights with one of the first two approaches above did not work either, as the 3D buildings represented in the resulting STL files had triangulated walls.

In looking for a direct converter from Shapefile geometries to STL, I found Doug McCune’s shp2stl script at https://github.com/dougmccune/shp2stl and his extensive examples and explanations in a blog post on “Using shp2stl to Convert Maps to 3D Models”. This script runs within the NodeJS platform, which needs to be installed and understood – a workflow that turned out to be a tad too complicated for a time-strapped Windows user. Although I managed to convert the Ryerson campus using shp2stl, I never printed the resulting model due to another, unrelated challenge: I was unable to add a base plate to the model (for my buildings to stand on!).

Getting those walls straight: ArcScene, VRML, and MeshLab

Another surprise find, made just a few days ago, enabled the printing of my first city model from the City of Toronto’s 3D massing (building height) dataset. This approach uses a combination of Esri’s ArcScene and the MeshLab software. Within ArcScene, I could load the 3D massing Shapefile (after clipping/editing it down to an area around campus using QGIS), define vertical extrusion on the basis of the building heights (EleZ variable), and save the 3D scene in the VRML format as a *.wrl (“world”) file. Using MeshLab, the VRML file could then be imported and immediately exported in STL format for printing.

While this is the only approach included in this post that relies on a commercial tool, ArcScene, the reader can likely find an alternative workflow based on free/open-source software to extrude Shapefile polygons and turn them into STL, whether or not this requires the intermediate step through the VRML format.
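
One possible open-source route – which I have not tested in this project, so please read it as a sketch rather than a proven recipe – would use the Python libraries fiona, shapely, and trimesh to read the building polygons, extrude each footprint by its height attribute, and write an STL directly. The file names are hypothetical, and the EleZ field is the building height variable mentioned above:

# Hedged sketch: extruding building footprints to a printable STL with
# open-source Python tools; multipart geometries would need to be split first.
import fiona                                     # reads the Shapefile
from shapely.geometry import shape
import trimesh                                   # extrusion and STL export

meshes = []
with fiona.open("campus_buildings.shp") as src:  # hypothetical file name
    for feature in src:
        footprint = shape(feature["geometry"])
        height = float(feature["properties"]["EleZ"])
        if footprint.is_empty or height <= 0:
            continue
        # extrude the 2D footprint straight up by the building height
        meshes.append(trimesh.creation.extrude_polygon(footprint, height))

model = trimesh.util.concatenate(meshes)
model.export("campus_buildings.stl")

Extruding the polygons directly keeps the walls vertical by construction, which is exactly what the greyscale-image route above could not guarantee.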

GIS Day 2015 at Ryerson – A Showcase of Geographic Information System Research and Applications

Ryerson students, faculty, staff, and the local community are invited to explore and celebrate Geographic Information Systems (GIS) research and applications. Keynote presentations will outline the pervasive use of geospatial data analysis and mapping in business, municipal government, and environmental applications. Research posters, software demos, and course projects will further illustrate the benefits of GIS across all sectors of society.

Date: Wednesday, November 18, 2015
Time: 1:00pm-5:00pm
Location: Library Building, 4th Floor, LIB-489 (enter at 350 Victoria Street, proceed to 2nd floor, and take elevators inside the library to 4th floor)

Tentative schedule:

  • 1:00 Soft kick-off, posters & demos
  • 1:25 Welcome
  • 1:30-2:00 Dr. Namrata Shrestha, Senior Landscape Ecologist, Toronto & Region Conservation Authority
  • 2:00-2:30 posters & demos
  • 2:30-3:00 Andrew Lyszkiewicz, Program Manager, Information & Technology Division, City of Toronto
  • 3:00-3:30 posters & demos
  • 3:30-4:00 Matthew Cole, Manager, Business Geomatics, and William Davis, Cartographer and Data Analyst, The Toronto Star
  • 4:00 GIS Day cake!
  • 5:00 End

GIS Day is a global event under the motto “Discovering the World through GIS”. It takes place during National Geographic’s Geography Awareness Week, which in 2015 is themed “Explore! The Power of Maps”, and aligns with the United Nations-supported International Map Year 2015-2016.

Event co-hosted by the Department of Geography & Environmental Studies and the Geospatial Map & Data Centre. Coffee/tea and snacks provided throughout the afternoon. Contact: Dr. Claus Rinner

Geospatial Data Preparation for 3D Printed Geographies

I am collaborating with my colleague Dr. Claire Oswald on a RECODE-funded social innovation project aimed at using “A 3D elevation model of Toronto watersheds to promote citizen science in urban hydrology and water resources”. Our tweets of the first prototypes printed at the Toronto Public Library have garnered quite a bit of interest – here’s how we did it!

[Embedded tweets: Claire’s DEM print (21 Aug) and Claus’ population density print (22 Aug)]

The process from geography to 3D print model includes four steps:

  1. collect geospatial data
  2. process and map the data within a geographic information system (GIS)
  3. convert the map to a 3D print format
  4. verify the resulting model in the 3D printer software

So far, we have made two test prints of very different data. One is a digital elevation model (DEM) of the Don River watershed; the other represents population density by Toronto Census tract. A DEM for Southern Ontario, created by the Geological Survey of Canada, was downloaded from Natural Resources Canada’s GeoGratis open data site at http://geogratis.gc.ca/. It comes at a spatial resolution of 30m x 30m grid cells with a vertical accuracy of 3m.

The Don River watershed boundary from the Ontario Ministry of Natural Resources was obtained via the Ontario Council of University Libraries’ geospatial portal, as shown in the following screenshot.

Download of watershed boundary file

The population density data and Census tract boundaries from Statistics Canada were obtained via Ryerson University’s Geospatial Map and Data Centre at http://library.ryerson.ca/gmdc/ (limited to research and teaching purposes).

The Don River watershed DEM print was prepared in the ArcGIS software by clipping the DEM to the Don River watershed boundary selected from the quaternary watershed boundaries. The Don River DEM was visualized in several ways, including the “flat” greyscale map with shades stretched between actual minimum and maximum values, which is needed for conversion to 3D print format, as well as the more illustrative “hillshade” technique with semi-transparent land-use overlay (not further used in our 3D project).
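
For those who prefer a scriptable route, the min-max stretch that turns the clipped DEM into a printable greyscale image can also be reproduced outside a desktop GIS. The snippet below is my illustration rather than the ArcGIS steps we actually used, and the file names are placeholders:

# Sketch: min-max stretch of a DEM to an 8-bit greyscale PNG
import numpy as np
import rasterio                                  # reads the clipped DEM
from PIL import Image                            # writes the greyscale image

with rasterio.open("don_river_dem.tif") as src:  # hypothetical file name
    dem = src.read(1, masked=True)               # first band, NoData masked

lo, hi = dem.min(), dem.max()
grey = ((dem - lo) / (hi - lo) * 255).filled(0).astype(np.uint8)
Image.fromarray(grey).save("don_river_dem_grey.png")

The result follows the usual heightmap convention of lighter pixels representing higher elevations.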

DEM of Don River watershed
Hillshade of Don River valley at Thorncliffe Park

The population density print was prepared in the free, open-source QGIS software. A choropleth map with a greyscale symbology was created, so that the lighter shades represented the larger population density values (yes, this is against cartographic design principles but needed here). A quantile classification with seven manually rounded class breaks was used, and the first class reserved for zero population density values (Census tracts without residential population).
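
For readers unfamiliar with quantile classification, the class breaks are simply equal-count percentiles of the attribute values, which QGIS computes internally. The toy sketch below uses made-up densities, not the actual Census data, and mimics the idea of reserving the first class for zeros and rounding the remaining breaks by hand:

# Toy sketch of quantile class breaks (illustrative values only)
import numpy as np

density = np.array([0, 0, 850, 1200, 2400, 3100, 4800, 7200, 9900, 15300])
nonzero = density[density > 0]                   # first class reserved for zeros
breaks = np.percentile(nonzero, np.linspace(0, 100, 7))  # bounds of six classes
print(np.round(breaks, -2))                      # rounded breaks, set manually in QGIS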

[Screenshot: QGIS project for the 3D population density map]

In QGIS’ print composer, the map was completed with a black background, a legend, and a data source statement. The additional elements were kept in dark grey so that they would be only slightly raised over the black/lowest areas in the 3D print.

[Screenshot: QGIS print composer layout for the population density map]

The key step of converting the greyscale maps from the GIS projects to 3D print-compliant STL file format was performed using a script called “heightmap2stl.jar” created by Markus Fussenegger. The script was downloaded from http://www.thingiverse.com/thing:15276/#files, and used with the help of instructions written by James Dittrich of the University of Oregon, posted at http://adv-geo-research.blogspot.ca/2013/10/converting-dems-to-stl-files-for-3d.html. Here is a sample run with zero base height and a value of 100 for the vertical extent.

Command for PNG to STL conversion
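
For reference, the call follows the pattern below; I am quoting the argument order – image file, model height, base height – from memory of Dittrich’s instructions, so please verify it there, and note that the file name is only a placeholder:

java -jar heightmap2stl.jar don_river_dem_grey.png 100 0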

The final step of pre-print processing involves loading the STL file into the 3D printer’s proprietary software to prepare the print file and check parameters such as validity of the structure, print resolution, fill options for hollow parts, and overall print duration. At the Toronto Public Library, 3D print sessions are limited to two hours. The following screenshot shows the Don River DEM in the MakerBot Replicator 2 software, corresponding to the printer used in the Library. Note that the model shown was too large to be printed in two hours and had to be reduced below the maximum printer dimensions.

Don River watershed model in 3D printing software

The following photo by Claire Oswald shows how the MakerBot Replicator 2 in the Toronto Reference Library’s digital innovation hub prints layer upon layer of the PLA plastic filament for the DEM surface and the standard hexagonal fill of cavities.

DEM in printing process - photo by C. Oswald

The final products of our initial 3D print experiments have dimensions of approximately 10-20cm. They have made the rounds among curious-to-enthusiastic students and colleagues. We are in the process of improving model quality, developing additional models, and planning for their use in environmental education and public outreach.

The printed Don River watershed model

3D-printed Toronto population density map

Looking for a secure, laid-back, and meaningful job in a growing field? Get into Geography!

This text was first posted as a guest contribution to WhyRyerson?, the Undergraduate Admissions and Recruitment blog at Ryerson University. Images were added after the initial posting.

Geography@Ryerson is different. Atlases, globes, and Google Maps are nice pastimes, but we are more interested in OpenStreetMap, CartoDB, and GeoDA. We map global flight paths, tweets, invasive species, and shoplifters. As a student in Geographic Analysis you will gain real-world, or rather real-work, experience during your studies. This degree is unique among Geo programs in Ontario, if not in Canada, for its career focus.

Mapping global flight paths.
(Source: Toronto Star, 24 May 2013)

The BA in Geographic Analysis has a 40-year record of placing graduates in planning and decision-making jobs across the public and private sectors. Jobs include Data Technician, Geographic Information Systems (GIS) Specialist, Geospatial Analyst, Mapping Technologist, GIS Consultant, Environmental Analyst, Market Research Analyst, Real-Estate Analyst, Crime Analyst, and many more. You name the industry or government branch, we’ll tell you what Geographers are doing for them. And these jobs are secure: Many are within government, or, if they are in the private sector, they tend to be in units that make businesses more efficient (and therefore are essential themselves!).

And these are great jobs, too. In November 2013, GIS Specialists were characterized as a low-stress job by CNN Money/PayScale. There were half a million positions in the US, with an expected 22% growth over 10 years, and a median pay of US$53,400 per year. In their previous survey, Market Research Analysts had made the top-10, with over a quarter million jobs, over 40% expected growth, and a median pay of US$63,100. The 2010 survey described GIS Analyst as a stress-free job with a median salary of US$75,000.

Mapping Technologist, one of Canada’s best jobs!
(Source: Canadian Business, 23 April 2015)

Closer to home, in April 2015 Canadian Business magazine put Mapping Technologists among the top-10 of all jobs in Canada! They note that “The explosion of big data and the growing need for location-aware hardware and software has led to a boom in the field of mapping”. With a median salary of CA$68,640, a 25% salary growth, and a 20% increase in jobs over five years, “this class of technology workers will pave the way”. According to Service Canada, “Mapping and related technologists and technicians gather, analyze, interpret and use geospatial information for applications in natural resources, geology, environment and land use planning. […] They are employed by all levels of government, the armed forces, utilities, mapping, computer software, forestry, architectural, engineering and consulting firms”. Based on the excellent reputation of our program in the Toronto area, you can add the many jobs in the business, real-estate, social, health, and safety fields to this list!

Sample applications of Geographic Analysis
(Source: Google image search)

While you may find the perspective of a well-paid, laid-back job in a growing field attractive enough, there is more to being a Ryerson-trained Geographer. Your work will help make important decisions in society. This could be with the City of Toronto or a Provincial or Federal ministry, where you turn geospatial data into maps and decision support tools in fields such as environmental assessment, social policy, parks and forestry, waste management, immigration, crime prevention, natural resources management, utilities, transportation, … . Or, you may find yourself analysing socio-economic data and crime incidents for a regional police service in order to guide their enforcement officers, as well as crime prevention and community outreach activities. Many of our graduates work for major retail or real-estate companies determining the best branch locations, efficient delivery of products and services, or mapping and forecasting population and competitors. Or you could turn your expertise into a highly profitable free-lance GIS and mapping consultancy.

Geography is one of the broadest fields of study out there, which can be intimidating. Geography@Ryerson, however, is different, as we provide you with a “toolkit” to turn your interest in the City, the region, and the world, and your fascination with people and the environment, into a fulfilling, secure, laid-back, yet meaningful job!

Toronto elevation model in Minecraft

Minecraft is a fascinating video game that remains popular with the pre-teen, teen, and post-teen crowds. You build and/or exploit a 3D world by manipulating blocks of various materials such as “stone”, “dirt”, or “sand”. In the footsteps of my colleague Pamela Robinson in the School of Urban and Regional Planning, and her student Lisa Ward Mather, I became interested in ‘serious’ applications of Minecraft. Lisa studied the use of the game as a civic engagement tool. Apparently, the blocky 3D nature of Minecraft worlds can be useful in planning to give viewers an idea of planned building volumes while making it clear that preliminary displays are not architectural plans.

Taking a geographic perspective, I am interested in the potential of Minecraft to educate kids about larger areas, say the City of Toronto. In this post, I outline the conversion of a digital elevation model (DEM) into a Minecraft terrain. I imagine the output as a novel way for ‘gamers’ to explore and interact with the city’s topography. Some pointers to related, but not Toronto-specific work include:

  • GIS StackExchange discussion on “Bringing GIS data into Minecraft“, including links to the UK and Denmark modeled in Minecraft
  • A video conversation about “Professional Minecraft GIS“, where Ulf Mansson combined OpenStreetMap and open government data
  • Workflow instructions for converting “Historical Maps into Minecraft” using WorldPainter, which automatically converts DEMs into Minecraft terrain (if I had seen this before I started implementing the Python script outlined below…)
  • An extensive webinar on “Geospatial and Minecraft” by FME vendor Safe Software, touching on creating Minecraft worlds from DEMs, GPS, LiDAR, building information management, and the rule-based CityEngine software

The source data for my modest pilot project came from the Canadian Digital Elevation Model (CDEM) by Natural Resources Canada, accessed using the GeoGratis Geospatial Data Extraction tool at http://geogratis.gc.ca/site/eng/extraction. In QGIS, I converted the GeoTIFF file to ASCII Grid format, which has the advantage of being human-readable. I also experimented with clipping parts from the full DEM and/or reducing the raster resolution, since the first attempts at processing would have taken several hours. The QGIS 2.2 raster translate or clip operations ran a GDAL function along the following lines (see http://www.gdal.org/gdal_translate.html and http://www.gdal.org/formats_list.html for details):

gdal_translate -projwin [ulx uly lrx lry] -outsize 25% 25% -of AAIGrid [input_file.tif] [output_file.asc]

On the Minecraft side, you need an account (for a small cost), a working copy of the game, and an installation of MCEdit. Player accounts are sold and managed by the game’s developer company, Mojang, see https://minecraft.net/store/minecraft. The Minecraft software itself is launched from the Web – don’t ask about the details but note that I am using version 1.8.7 at the time of writing. MCEdit is a free tool for editing saved Minecraft worlds. It has an option to add functionality through so-called ‘filters’.

The MCEdit filter I wrote is “dem_gen.py”, a Python script that collects a few input parameters from the user and then reads an ASCII GRID file (currently hard-coded to the above-mentioned Toronto area DEM), iterates through its rows (x direction) and columns (y direction in GIS, z in Minecraft), and recreates the DEM in Minecraft as a collection of ‘columns’ (z direction in GIS, y in Minecraft). Each terrain column is made of stone at the base and dirt as the top-most layer(s), or of other user-defined materials.
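
To give a flavour of the logic, here is a simplified sketch – not the actual dem_gen.py code – in which set_block stands in for MCEdit’s block-placement call (level.setBlockAt inside the filter’s perform function, if I recall correctly):

# Simplified sketch of the dem_gen.py idea (not the actual filter code).
# Block IDs 1 and 3 are stone and dirt in the classic numeric ID scheme.
def build_terrain(asc_lines, set_block, vertical_scale=5.0):
    header = {}
    for line in asc_lines[:6]:                   # ASCII Grid header lines
        key, value = line.split()
        header[key.lower()] = float(value)
    nodata = header.get("nodata_value", -9999.0)

    for row, line in enumerate(asc_lines[6:]):   # one DEM row per line
        for col, cell in enumerate(line.split()):
            elev = float(cell)
            if elev == nodata:
                continue
            top = int(elev / vertical_scale)     # metres -> block levels
            for y in range(top):                 # stone column from the base ...
                set_block(col, y, row, 1)
            set_block(col, top, row, 3)          # ... capped with a dirt block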

I have freshly uploaded the very first version 0.1 to GitHub, see https://github.com/crinner/mc_dem_gen. (This also serves as my first developer experience with GitHub!) The general framework for an MCEdit filter and the loop creating the new blocks were modified from the “mountain_gen.py” (Mountain Generator) filter found at http://www.mediafire.com/download.php?asfkqo3hk0lkv1f. The filter is ‘installed’ by placing it in the filter subfolder in the MCEdit installation. The process then simply involves creating an empty world (I used a superflat world with only a bedrock layer) and running the DEM Generator filter. To run any filter in MCEdit, select an area of the world, press ‘5’, and select the filter from the list.

[Screenshot: QGIS project with the Toronto-area CDEM]

Converting the 2,400 by 1,600 pixel CDEM dataset shown in the above screenshot of my QGIS project took about half a day on a middle-aged Dell Latitude E6410 laptop.  The screenshot below shows that many data “chunks” are missing from this preliminary result, perhaps an issue when saving the terrain in MCEdit.

[Minecraft screenshot: preliminary Toronto terrain with missing chunks]

With a coarser DEM resolution of 600 by 400 pixels and using a newer Dell XPS 12 tablet (!), the processing time was reduced to ten minutes or so, and the result is promising. In the following screenshots, we are – I believe – looking at the outlets of the Humber River and Don River into Lake Ontario. Note the large vertical exaggeration that results from the horizontal dimensions being shrunk from around 1 block = 20m to 1 block = 80m, while vertically 1 block still corresponds to 5m – a roughly 16-fold (80m/5m) exaggeration.

[Minecraft screenshots: the Humber River and Don River outlets into Lake Ontario]

There remain a number of challenges, including a problem translating the geographic x/y/z coordinate system into the game’s x/-z/y coordinate system – the terrain currently is not oriented properly. More thought also has to be put into the scaling of the horizontal dimensions vis-a-vis the vertical dimension, adding the Lake Ontario water level, and creating signs with geographic names or other means of orientation. Therefore, your contributions to the GitHub project are more than welcome!

Update, 10 June 2015: I was made aware of the #MinecraftNiagara project, which Geospatial Niagara commissioned to students in the Niagara College GIS program. They aim to create “a 1:1 scale representation of Niagara’s elevation, roads, hydrology and wooded areas” to engage students at local schools with the area’s geography. It looks like they used ArcGIS and the FME converter, as described in a section of this blog post: http://geospatialniagara.com/backlog-of-updates/. Two screenshots of the Lower Balls Falls near St. Catharines were provided by @geoniagara’s Darren Platakis (before and after conversion):

[Screenshots: Lower Balls Falls before and after conversion to Minecraft]

My takeaways from AAG 2015

The 2015 Annual Meeting of the Association of American Geographers (AAG) in Chicago is long gone – time for a summary of key lessons and notable ideas taken home from three high-energy conference days.

Choosing which sessions to attend was the first major challenge, as there were over ninety (90!) parallel sessions scheduled in many time slots. I put my program together based on presentations by Ryerson colleagues and students (https://gis.blog.ryerson.ca/2015/04/17/ryerson-geographers-at-aag-2015/) and those given by colleagues and students of the Geothink project (http://geothink.ca/american-associaton-of-geographers-aag-2015-annual-meeting-geothink-program-guide/), as well as by looking through the presenter list and finding sessions sponsored by select AAG specialty groups (notably GIScience and Cartography). Abstracts for the presentations mentioned in this blog can be found via the “preliminary” conference program at http://meridian.aag.org/callforpapers/program/index.cfm?mtgID=60.

Upon arrival, I was impressed by the size and wealth of the industrial and transportation infrastructure in Chicago as well as the volume of the central business district, as seen from the airport train and when walking around in the downtown core.

My conference started on Wednesday, 22 April 2015, with Session 2186 “Cartography in and out of the Classroom: Current Educational Practices“. In a diverse set of presentations, Pontus Hennerdal from Stockholm University presented an experiment with a golf-like computer game played on a Mercator-projected world map to help children understand map projections. Pontus also referred to the issue of “world map continuity” using an animated film that is available on his homepage at http://www.su.se/profiles/poer5337-1.188256. In the second presentation, Jeff Howarth from Middlebury College assessed the relationship between spatial thinking skills of students and their ability to learn GIS. This research was motivated by an anonymous student comment about a perceived split of GIS classes into those students who “get it” vs. those who don’t. Jeff notes that spatial thinking along with skills in orientation, visualization, and a sense of direction sets students up for success in STEM (science, technology, engineering, math) courses, including GIS. Next was Cindy Brewer, Head of the Department of Geography at Penn State University, with an overview of additions and changes to the 2nd edition of her Esri Press book “Designing Better Maps”. The fourth presentation was given by David Fairbairn of Newcastle, Chair of the Commission on Education and Training of the International Cartographic Association. David examined the accreditation of cartography-related programs of study globally, and somewhat surprisingly, reported his conclusion that cartography may not be considered a profession and accreditation would bring more disadvantages (incl. management, liability, barriers to progress) than benefits to the discipline. Finally, Kenneth Field of Esri took the stage to discuss perceptions and misconceptions of cartography and the cartographer. These include the rejection of the “map police” when trained cartographers dare to criticize the “exploratory playful” maps created by some of today’s map-makers (see my post at http://gis.blog.ryerson.ca/2015/04/04/about-quick-service-mapping-and-lines-in-the-sand/).

A large part of the remainder of Wednesday was spent in a series of sessions on “Looking Backwards and Forwards in Participatory GIS”. Of particular note were the presentations by Renee Sieber, professor of many things at McGill and leader of the Geothink SSHRC Partnership Grant (http://www.geothink.ca), and Mike McCall, senior researcher at Universidad Nacional Autonoma de Mexico. Renee spoke thought-provokingly, as usual, about “frictionless civic participation”. She observes how ever easier-to-use crowdsourcing tools are reducing government-citizen interactions to customer relationships, and participation is becoming a product being delivered efficiently, rather than a democratic process that engages citizens in a meaningful way. Mike spoke about the development of Participatory GIS (PGIS) in times of volunteered geographic information (VGI) and crowdsourcing, arguing to operationalize VGI within PGIS. The session also included a brief discussion among members of the audience and presenters about the need for base maps or imagery as a backdrop for PGIS – an interesting question, as my students and I are arguing that “seed contents” will help generate meaningful discussion, thus going even beyond including just a base map. Finally, two thoughts brought forward by Muki Haklay of University College London: Given the “GIS chauffeurs” of early-day PGIS projects, he asked whether we continue to need such facilitators in times of Renee Sieber’s frictionless participation. And he observed that the power of a printed map brought to a community development meeting is still uncontestable. Muki’s extensive raw notes from the AAG conference can be found on his blog at https://povesham.wordpress.com/.

In the afternoon, I dropped in to Session 2478, which celebrated David Huff’s contribution to applied geography and business. My colleague Tony Hernandez chaired and co-organized the session, in which Tony Lea, Senior VP Research of Toronto-based Environics Analytics and instructor in our Master of Spatial Analysis (MSA) program, and other business geographers paid tribute to the Huff model for predicting consumers’ spatial behaviour (such as the probability of patronizing specific store locations). Members of the Huff family were also present to remember the man behind the model, who passed away in Summer 2014. A written tribute by Tony Lea can be found at http://www.environicsanalytics.ca/footer/news/2014/09/04/a-tribute-to-david-huff-the-man-and-the-model.

Also on my agenda was a trip to the AAG vendor expo, where I was pleased to see my book – “Multicriteria Decision Analysis in Geographic Information Science” – in the Springer booth!

Thursday, 23 April 2015, began with an 8am session on “Spatial Big Data and Everyday Life“. In a mixed bag of presentations, Till Straube of Goethe University in Frankfurt asked “Where is Big Data?”; Birmingham’s Agnieszka Leszczynski argued that online users are more concerned with controlling their personal location data than with how they are ultimately used; Kentucky’s Matt Wilson showed select examples from half a century of animated maps that span the boundary between data visualization and art; Monica Stephens of the University at Buffalo discussed the rural exclusions of crowdsourced big data and characterized Wikipedia articles about rural towns in the US as Mad Libs based on Census information; and finally, Edinburgh’s Chris Speed conducted an IoT self test, in which he examined the impact of an Internet-connected toilet paper holder on family dynamics…

The remainder of Thursday was devoted to CyberGIS and new directions in mapping. The panel on “Frontiers in CyberGIS Education” was very interesting in that many of the challenges reported in teaching CyberGIS really are persistent challenges in teaching plain-old GIS. For example, panelists Tim Nyerges, Wenwen Li, Patricia Carbajalas, Dan Goldberg, and Britta Ricker noted the difficulty of getting undergraduate students to take more than one or two consecutive GIS courses; the challenge of teaching advanced GIS concepts such as enterprise GIS and CyberGIS (which I understand to mean GIS-as-a-service); and the nature of Geography as a “discovery major”, i.e. a program that attracts advanced students who are struggling in their original subjects. One of the concluding comments from the CyberGIS panel was a call to develop interdisciplinary, data-centred programs – ASU’s GIScience program was named as an example.

Next, I caught the first of two panels on “New Directions in Mapping”, organized by Stamen’s Alan McConchie, Britta Ricker of U Washington at Tacoma, and Kentucky’s Matt Zook. A panel consisting of representatives of what I call the “quick-service mapping” industry (Google, Mapbox, MapZen, Stamen) talked about job qualifications and their firms’ relation to academic teaching and research. We heard that “Geography” has an antiquated connotation and sounds old-fashioned, that the firms use “geo” to avoid the complexities of “geography”, and that geography is considered a “niche” field. My hunch is that geography is perhaps rather too broad (and “geo” even broader), but along with Peter Johnson’s (U Waterloo) comment from the audience, I must also admit that you don’t need to be a geographer to make maps, just like you don’t have to be a mathematician to do some calculations. Tips for students interested in working for the quick-service mapping industry included developing a portfolio, practicing problem-solving and other soft skills, and knowing how to use platforms such as GitHub (before learning to program). A telltale tweet summarizing the panel discussion:

Thursday evening provided an opportunity to practice some burger cartography. It was time for the “Iron Sheep” hackathon organized by the FloatingSheep collective of academic geographers. Teams of five were given a wild dataset of geolocated tweets and a short 90-or-so minute time frame to produce some cool & funny map(s) and win a trophy for the best, worst, or in-between product. It was interesting to see how a group of strangers, new to the competition and with no clue about how to get started, would end up producing a wonderful map such as this :-)

[Map produced at the Iron Sheep hackathon]

My last day at AAG 2015, Friday, April 24, took off with a half-day technical workshop on “Let’s Talk About Your Geostack”. The four active participants got a tremendous amount of attention from instructor-consultant @EricTheise. Basically, I went from zero to 100 in terms of having PostgreSQL, PostGIS, Python, NodeJS, and TileMill installed and running on my laptop – catching up within four hours with the tools that some of my students have been talking about, and using, in the last couple of years!

In the afternoon, attention turned to OpenStreetMap (OSM), with a series of sessions organized by Muki Haklay, who argues that OSM warrants its own branch of research, OpenStreetMap Studies. I caught the second session which started with Salzburg’s Martin Loidl showing an approach in development to detect and correct attribute (tag) inconsistencies in OSM based on information contained in the OSM data set (intrinsic approach). Geothink co-investigator Peter Johnson of UWaterloo presented preliminary results of his study of OSM adoption (or lack thereof) by municipal government staff. In eight interviews with Canadian city staff, Peter did not find a single official use of OSM. Extensive discussions followed the set of four presentations, making for a highly informative session. One of the fundamental questions raised was whether OSM is distinct enough from other VGI and citizen science projects that it merits its own research approach. While typically considered one of the largest crowdmapping projects, it was noted that participation is “shallow” (Muki Haklay) with only 10k active users among 2 million registered users. Martin Loidl had noted that OSM is focused on geometry data, yet with a flat structure and no standards other than those agreed-upon via the OSM wiki. Alan McConchie added the caution that OSM contributions only make it onto the map if they are included in the “style” files used to render OSM data. Other issues raised by Alan included the privacy of contributors and questions about authority. For example, contributors should be aware of the visualization and statistics tools developed by Pascal Neis at http://neis-one.org/! We were reminded that Muki Haklay has developed a code of engagement for researchers studying OSM (read the documentation, experience actively contributing, explore the data, talk to the OSM community, publish open access, commit to knowledge transfer). Muki summarized the debate by suggesting that academics should act as “critical friends” vis-à-vis the OSM community and project. To reconcile “OSM Studies” with VGI, citizen science, and the participatory Geoweb, I’d refer to the typology of user contributions developed by Rinner & Fast (2014). In that paper, we do in fact single out OSM (along with Wikimapia) as a “crowd-mapping” application, yet within a continuum of related Geoweb applications.

Notes from #NepalQuake Mapping Sessions @RyersonU Geography

This is a brief account of two “Mapping for Nepal” sessions at Ryerson University’s Department of Geography and Environmental Studies. In an earlier post found at http://gis.blog.ryerson.ca/2015/04/27/notes-for-nepalquake-mapping-sessions-ryersonu-geography/, I collected information on humanitarian mapping for these same sessions.

Mapathon @RyersonU, Geography & Spatial on Monday, 27 April 2015, 10am-2pm. 1(+1) prof, 2 undergrads, 3 MSAs, 1 PhD, 1 alumnus came together two days after the devastating earthquake to put missing roads, buildings, and villages in Nepal on the map using the Humanitarian OpenStreetMap Team’s (HOT) task manager. Thank you to MSA alumnus Kamal Paudel for initiating and co-organizing this and the following meetings.

[Photo: mapathon participants in the MSA lab, 27 April 2015]

Mapathon @RyersonU, Geography & Spatial on Sunday, 3 May 2015, 4pm-8pm. Our second Nepal mapathon brought together a total of 15 volunteers, including undergraduate BA in Geographic Analysis and graduate Master of Spatial Analysis (MSA) students along with MSA alumni, profs, and members of the Toronto-area GIS community. On this Sunday afternoon we focused on completing and correcting the road/track/path network and adding missing buildings to the map of Nepal’s most affected disaster zones. Photos via our tweets:

My observations and thoughts from co-organizing and leading these sessions, and participating in the HOT/OSM editing:

  • In addition to supporting the #EqResponseNp in a small way, the situation provided an invaluable learning opportunity for everyone involved. Most participants of our sessions had never contributed to OSM, and some did not even know of its existence, despite being Geography students or GIS professionals. After creating OSM accounts and reading up on the available OSM and Nepal-specific documentation, participants got to map hundreds of points, lines, or polygons within just a couple of hours.
  • The flat OSM data model – conflating all geometries and all feature types in the same file – together with unclear or inconsistent tagging instructions for features such as roads, tracks, and paths challenged our prior experience with GIS and geographic data. Students in particular were concerned about the fact that their edits would go live without “someone checking”.
  • While the HOT task manager and the general workflow of choosing, locking, editing, and saving an area were a bit confusing at first, the iD editor used by most participants was found to be intuitive and was praised by GIS industry staff as “slick”.
  • The most recent HOT tasks were marked as not suitable for beginners after discussions among the OSM community about poor-quality contributions, leaving few options for (self-identified) beginners. It was most interesting to skim over the preceding discussion on the HOT chat and mailing list, e.g. reading a question about “who we let in”. I am not sure how the proponent would define “we” in a crowd-mapping project such as OSM.
  • There was a related Twitter #geowebchat on humanitarian mapping for Nepal: “How can we make sure newbies contribute productively?”, on Tuesday, 5 May 2015 (see transcript at http://mappingmashups.net/2015/05/05/geowebchat-transcript-5-may-2015-how-can-newbies-contribute-productively-to-humanitarian-mapping/).
  • The HOT tasks designated for more experienced contributors allowed adding post-disaster imagery as a custom background. I was not able to discern whether buildings were destroyed or where helicopters could land to reach remote villages, but I noticed numerous buildings (roofs) that were not included in the standard Bing imagery and were therefore missing from OSM.
  • The GIS professionals mentioned above included two analysts with a major GIS vendor, two GIS analysts with different regional conservation authorities, a GIS analyst with a major retail chain, and at least one GIS analyst with a municipal planning department (apologies for lack of exact job titles here). The fact that these, along with our Geography students, had mostly not been exposed to OSM is a concern, which however can be easily addressed by small changes in our curricula or extra-curricular initiatives. I am however a bit concerned as to whether the OSM community will be open to collaborating with the #GIStribe.
  • With reference to the #geowebchat, I’d posit that newbie != newbie. Geographers can contribute a host of expertise around interpreting features on the ground, even if they have “never mapped” (in the OSM sense of “mapping”). Trained GIS experts understand how features on the ground translate into data items and cannot be considered newbies either. In addition, face-to-face instructions by, and discussion with, experienced OSM contributors would certainly help to achieve a higher efficiency and quality of OSM contributions. In this sense, I am hoping that we will have more crowd-mapping sessions @RyersonU Geography, for Nepal and beyond.

Notes for #NepalQuake Mapping Sessions @RyersonU Geography

This is an impromptu collection of information to help Ryerson students, faculty, and alumni of the Department of Geography and Environmental Studies get started with OpenStreetMap (OSM) improvements for Nepal. As part of the international OSM community’s response, these contributions may help rescuers and first responders locate victims of the devastating earthquake.

Note that I moved the reports on our mapping sessions out into a separate post at http://gis.blog.ryerson.ca/2015/05/04/notes-from-nepalquake-mapping-sessions-ryersonu-geography/.

Information from local mappers: Kathmandu Living Labs (KLL), https://www.facebook.com/kathmandulivinglabs. KLL’s crowdmap for reports on the situation on the ground: http://kathmandulivinglabs.org/earthquake/

Humanitarian OpenStreetMap Team (HOT): http://hotosm.org/, http://wiki.openstreetmap.org/wiki/2015_Nepal_earthquake

Guides on how to get started with mapping for Nepal:

Communications among HOT contributors worldwide: https://kiwiirc.com/client/irc.oftc.net/?nick=mapper?#hot. Also check @hotosm and #hotosm on Twitter.

Things to consider when mapping:

  • When you start editing, you are locking “your” area (tile) – make sure you tag along, save your edits when you are done, provide a comment on the status of the map for the area, and unlock the tile.
  • Please focus on “white” tiles – see a discussion among HOT members on the benefits and drawbacks of including inexperienced mappers in the emergency situation, http://thread.gmane.org/gmane.comp.gis.openstreetmap.hot/7540/focus=7615 (via @clkao)
  • In the meantime (May 3rd), some HOT tasks have been designated for “more experienced mappers” and few unmapped areas are left in other tasks; you can however also verify completed tiles or participate in tasks marked as “2nd pass” in order to improve on previous mapping.
  • Don’t use any non-OSM/non-HOT online or offline datasets or services (e.g. Google Maps), since their information cannot be redistributed under the OSM license
  • Don’t overestimate highway width and capacity; consider all options (including unknown road, track, path) described at http://wiki.openstreetmap.org/wiki/Nepal/Roads. Here is a discussion of the options, extracted from the above-linked IRC channel (check for newer discussions on IRC or the HOT email list):

11:23:18 <ivansanchez> CGI958: If you don’t know the classification, it’s OK to tag them as highway=track for dirt roads, and highway=road for paved roads

11:26:06 <SK53> ivansanchez: highway=road is not that useful as it will not be used for routers, so I would chose unclassified or track

12:31:12 <cfbolz> So track is always preferable, if you don’t have precise info?
12:32:11 <cfbolz> Note that the task instructions directly contradict this at the moment: “highway=road Roads traced from satellite imagery for which a classification has not been determined yet. This is a temporary tag indicating further ground survey work is required.”

Another example of a discussion of this issue: http://www.openstreetmap.org/changeset/30490243

  • Map only things that are there, not those that may/could be there. Example: Don’t map a helipad object if you spot an open area that could be used for helicopter landing, create a polygon with landuse=grass instead (thanks to IRC posters SK53 and AndrewBuck).
  • Buildings as point features vs. residential areas (polygons): To expedite mapping, use landuse=residential, see IRC discussion below.
    [Screenshot: IRC discussion on how to map remote buildings]
    More about mapping buildings: http://wiki.openstreetmap.org/wiki/Nepal_remote_mapping_guide
  • Be aware that your edits on OSM are immediately “live” (after saving) and become part of the one and only OSM dataset. In addition, your work can be seen by anyone and may be analyzed in conjunction with your user name and locations (and thus potentially with your personal identity)

Note that I am a geographer (sort of) and GIScientist, but not an OpenStreetMap expert (yet). If you have additions or corrections to the above, let me know!

About Quick-Service Mapping and Lines in the Sand

A walk on the beach along the still-frozen Georgian Bay has helped me sort some thoughts regarding fast food cartography, quick-service mapping, and naturally occurring vs. artificial lines in the sand … but first things first: This post refers to a debate about Twitter mapping and neo-cartography that is raging on blogs across the planet and will flare up in the Geoweb chat on Twitter this Tuesday, https://twitter.com/hashtag/geowebchat. Update: #geowebchat transcript prepared by Alan McConchie available at http://mappingmashups.net/2015/04/07/geowebchat-transcript-7-april-2015-burger-cartography/.

Lines in the sand (Photos: Claus Rinner)

A few days ago, The Atlantic’s CityLab published an article entitled “Why Most Twitter Maps Can’t Be Trusted”, http://www.citylab.com/housing/2015/03/why-most-twitter-maps-cant-be-trusted/388586/. There have been other cautions that Twitter maps often just show where people live or work – and thus where they tweet. Along similar lines, a comic at xkcd illustrates how heatmaps of anything often just show population concentrations – “The business implications are clear!”, https://xkcd.com/1138/.

The CityLab article incited Andrew Hill, senior scientist at CartoDB and mapping instructor at New York University, to respond with a polemic “In defense of burger cartography”, http://andrewxhill.com/blog/2015/03/28/in-defense-of-burger-cartography/. In it, Hill replies to critics of novel map types by stating “The dogma of cartography is certain to be overturned by new discoveries, preferences, and norms from now until forever.” He likens the good people at CartoDB (an online map service) to some action movie characters who will move cartography beyond its “local optima [sic]”. Hill offers his personal label for the supposedly-new “exploratory playfulness with maps”: burger cartography.

Examples of CartoDB-based tweet maps in the media (Source: Taylor Shelton)

The core portion of Hill’s post argues that CartoDB’s Twitter maps make big numbers such as 32 million tweets understandable, as in the example of an animated map of tweets during the 2014 soccer world cup final. I find nothing wrong with this point, as it does not contradict the cautions against wrong conclusions from Twitter maps. However, the rest of Hill’s post is written in such a derogatory tone that it has drawn a number of well-thought responses from other cartographers:

  • Kenneth Field, Senior Cartographic Product Engineer at Esri and an avid blogger and tweeter of all things cartography, provides a sharp, point-by-point rebuttal of Hill’s post – lamenting the “Needless lines in the sand”, http://cartonerd.blogspot.co.uk/2015/03/needless-lines-in-sand.html. The only point I disagree with is the title, since I think we actually do need some lines in the sand (see below).
  • James Cheshire, Lecturer and geospatial visualization expert at University College London, Department of Geography, supports “Burger Cartography”, http://spatial.ly/2015/03/burger-cartography/, but shows that “Hill’s characterisation of cartography … is just wrong”.
  • Taylor Shelton, “pseudopositivist geographer”, PhD candidate at Clark University, and co-author of the study that triggered this debate, writes “In defense of map critique”, https://medium.com/@kyjts/in-defense-of-map-critique-ddef3d5e87d5. Shelton reveals Hill’s oversimplification by pointing to the need to consider context when interpreting maps, and to the “plenty of other ways that we can make maps of geotagged tweets without just ‘letting the data speak for themselves’.”

Extending the fast food metaphor, CartoDB can be described as a quick-service mapping platform – an amazing one at that, which is very popular with our students (more on that in a future post). I am pretty sure that CartoDB’s designers and developers generally respect cartographic design guidelines, and in fact have benefited commercially from implementing them. However, most of us do not live from fast food (= CartoDB, MapBox, Google Maps) alone. We either cook at home (e.g., R with ggplot2, QGIS; see my previous post on recent Twitter mapping projects by students) or treat ourselves to higher-end cuisine (e.g., ArcMap, MapInfo, MAPublisher), if we can afford it.

I fully expect that new mapping pathways, such as online public access to data and maps, crowdmapping, and cloud-based software-as-a-service, entail novel map uses, to which some existing cartographic principles will not apply. But dear Andrew Hill, this is a natural evolution of cartography, not a “goodbye old world”! Where the established guidelines are not applicable, we will need new ones – surely CartoDB developers and CartoDB users will be at the forefront of making these welcome contributions to cartography.

MacEachren’s Some Truth with Maps (Source: Amazon.com)

While I did not find many naturally occurring lines in the Georgian Bay sand this afternoon, I certainly think society needs to draw lines, including those that distinguish professional expertise from do-it-yourselfism. I trust trained map-makers (such as our Geographic Analysis and Spatial Analysis graduates!) to make maps that work and are as truthful as possible. We have a professional interest in critically assessing developments in GIS and mapping technologies and taking them up where suitable. The lines in the sand will be shifting, but to me they will continue to exist: separating professional and DIY cartographers, mapping for presentation of analysis results vs. exploratory playing with maps, quantitative maps vis-a-vis the map as a story … Of course, lines in the sand are pretty easy to cross, too!

Twitter Analytics Experiments in Geography and Spatial Analysis at Ryerson

In my Master of Spatial Analysis (MSA) course “Cartography and Geographic Visualization” in the Fall 2014 semester, three MSA students experimented with geospatial analysis of tweets. This post provides a brief account of the three student projects and ends with a caution about mapping and spatially analyzing tweets.

Yishi Zhao wrote her “mini research paper” assignment about “Exploring the Thematic Patterns of Twitter Feeds in Toronto: A Spatio-Temporal Approach”. Yishi’s goal was to identify the spatial and thematic patterns of geolocated tweets in Toronto at different times of day, as well as to explore the use of R for spatio-temporal analysis of the Twitter stream. Within the R platform, Yishi used the streamR package to collect geolocated tweets for the City of Toronto and mapped them by ward using a combination of MapTools, GISTools, and QGIS. Additionally, the tm package was used for text mining and to generate word clouds of the most frequent words tweeted at different times of the day.

Toronto tweets per population at different times of day – standard-deviation classification (Source: Yishi Zhao)
Frequent words in Toronto tweets at different times of day (Source: Yishi Zhao)

One general observation is that the spatial distribution of tweets (normalized by residential population) becomes increasingly concentrated in downtown throughout the day, while the set of most frequent words expands (along with the actual volume of tweets, which peaked in the 7pm-9pm period).

MSA student Alexa Hinves pursued a more focused objective, indicated in her paper’s title, “Twitter Data Mining with R for Business Analysts”. Her project aimed to examine the potential of geolocated Twitter data for branding research, using the example of singer Taylor Swift’s new album “1989”. Alexa explored the use of both the streamR and twitteR packages in R. The ggplot2, maps, and wordcloud packages were used for the presentation of results.

Distribution of geolocated tweets and word cloud referring to Taylor Swift (Source: Alexa Hinves)

Alexa’s map of 1,000 Taylor Swift-related tweets suffers from a challenge that is common to many Twitter maps – they basically show population distribution rather than spatial patterns that are specific to tweet topics or general Twitter use. In this instance, we see the major cities in the United States lighting up. The corresponding word cloud (which I pasted onto the map) led Alexa to speculate that businesses can use location-specific sentiment analysis for targeted advertising, for example in the context of product releases.

The third project was an analysis and map poster on “#TOpoli – Geovisualization of Political Twitter Data in Toronto, Ontario”, completed by MSA cand. Richard Wen. With this project, we turn our interest back to the City of Toronto and to the topic of the October 2014 municipal election. Richard used similar techniques as the other two students to collect geolocated tweets, the number of which he mapped by the 140 City neighbourhoods (normalized by neighbourhood area – “bubble map” at top of poster). Richard then created separate word clouds for the six former municipalities in Toronto and mapped them within those boundaries (map at bottom of poster).

#TOpoli map poster – spatial pattern and contents of tweets in Toronto’s 2014 mayoral election (Source: Richard Wen)

Despite the different approach to normalization (normalization by area compared to Yishi’s normalization by population), Richard also finds a concentration of Twitter activity in downtown Toronto. The word clouds contain similar terms, notably the names of the leading candidates, now-mayor John Tory and candidate Doug Ford. An interesting challenge arose in that we cannot tell just from the word count whether tweets with a candidate’s name were written in support or opposition to this candidate.

The three MSA students used the open-ended cartography assignment to acquire expertise in a topic that is “trending” among neo-cartographers. They have already been asked for advice by a graduate student of an environmental studies program contemplating a Twitter sentiment analysis for her Master’s thesis. Richard’s project also led to an ongoing collaboration with journalism and communication researchers. However, the most valuable lesson for the students and myself was an increased awareness of the pitfalls of analyzing and mapping tweets. These pitfalls stem from the selective use of Twitter among population subgroups (e.g., young professionals; globally the English-speaking countries), the small proportion of tweets that have a location attached (less than 1% of all tweets by some accounts), and the limitations imposed by Twitter on the collection of free samples from the Twitter stream.

I have previously discussed some of these data-related issues in a post on “Big Data – Déjà Vu in Geographic Information Science”. An additional discussion of the cartography-related pitfalls of mapping tweets will be the subject of another blog post.