3D Printed Geographies – Techniques and Examples

As a follow-up to my post on “Geospatial Data Preparation for 3D Printed Geographies” (19 Sept 2015), I am providing an update on the different approaches that my colleague Dr. Claire Oswald and I have explored for our one-year RECODE grant entitled “A 3D elevation model of Toronto watersheds to promote citizen science in urban hydrology and water resources”. The tools that we have used to turn geospatial data into 3D prints include the program heightmap2stl; direct loading of a greyscale image into the Cura 3D modeling software; the QGIS plugin DEMto3D; the script shp2stl.js; and a workflow using Esri’s ArcScene for 3D extrusion, saving in VRML format, and translating this file into STL format using the MeshLab software.

The starting point: GIS and heightmap2stl

As a GIS specialist with limited knowledge of 3D graphics or computer-aided design, I rely heavily on the work of others for all of the techniques used to make geospatial data printable, and my understanding of the final steps of data conversion and 3D print preparation is somewhat limited. With this in mind, the first approach to converting geospatial data, specifically a digital elevation model, used Markus Fussenegger’s Java program heightmap2stl, which can be downloaded from http://www.thingiverse.com/thing:15276/#files and used according to James Dittrich’s detailed instructions on “Converting DEMs to STL files for 3D printing” (University of Oregon). The process from QGIS or ArcGIS project to greyscale map image to printable STL file was outlined in my previous post at http://gis.blog.ryerson.ca/2015/09/19/geospatial-data-preparation-for-3d-printed-geographies/.

Quicker and not dirtier: direct import into Cura

The use of the heightmap2stl program in a Windows environment requires a somewhat cumbersome process on the Windows command line, and the resulting STL files seemed exceedingly large, although I did not investigate this issue systematically. I was therefore very pleased to discover, by accident, that the Cura software, which I use with my Lulzbot Taz 5 printer, can load greyscale images directly.

The following screenshot shows the available parameters after clicking “Load Model” and selecting an image file (e.g. in PNG format, rather than an STL file). The parameters include the height of the model; the height of a base to be created; the model width and depth within the printer’s hardware limits; the direction in which greyscale values are interpreted as height (lighter or darker is higher); and whether to smooth the model surface.

[Screenshot: Cura import settings for the Oak Ridges Moraine DEM]

The most ‘popular’ model created using this workflow is our regional watershed puzzle. The puzzle consists of a baseplate with a few small watersheds that drain directly into Lake Ontario along with a set of ten separately printed watersheds, which cover the jurisdiction of the Toronto and Region Conservation Authority (TRCA).

Controlling geographic scale: QGIS plugin DEMto3D

Both of the first two approaches have a significant limitation for 3D printing of geography in that they do not support controlling geographic scale. To keep track of scale and vertical exaggeration, one has to calculate these values on the basis of geographic extent, elevation differential, and model/printer parameters. This is where the neat QGIS plugin DEMto3D comes into play.
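For readers who want to check such numbers by hand, here is a minimal sketch of the arithmetic; the extent, print width, and relief values are illustrative, not taken from any of our prints:

```python
# Back-of-the-envelope scale/exaggeration check for a 3D terrain print.
# All function names and numbers are illustrative assumptions.

def print_scale(extent_width_m, model_width_mm):
    """Geographic scale denominator: real-world width / printed width."""
    return extent_width_m * 1000.0 / model_width_mm

def model_height_mm(relief_m, scale_denominator, exaggeration):
    """Printed height of the terrain relief at a given vertical exaggeration."""
    return relief_m * 1000.0 / scale_denominator * exaggeration

# Example: a 30 km wide extent printed at 200 mm width -> a scale of 1:150,000
scale = print_scale(30_000, 200)          # 150000.0
# 300 m of relief at 1:150,000 with 10x exaggeration -> 20 mm of model height
height = model_height_mm(300, scale, 10)  # 20.0
print(scale, height)
```

Without the exaggeration factor, the same 300 m of relief would print as a barely visible 2 mm, which is exactly why regional terrain models need vertical exaggeration.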

As can be seen in the following screenshot, DEMto3D allows us to determine a print extent from the current QGIS project or layer extents; set the geographic scale in conjunction with the dimensions of the 3D print; specify the vertical exaggeration; and set the height at the base of the model to a geographic elevation. For example, the current setting of 0m would print elevations above sea level, while a setting of 73m would print elevations of the Toronto region in relation to the surface level of Lake Ontario. One shortcoming of DEMto3D is that vertical exaggeration is oddly limited to a factor of 10, which we did not always find sufficient to visualize regional topography.

[Screenshot: DEMto3D settings for the Humber watershed – 1:150,000 scale, 10x exaggeration, 0m base]

Using DEMto3D, we recently printed our first multi-part geography, a two-piece model of the Oak Ridges Moraine, which stretches over 200 km in the east-west direction to the north of the City of Toronto and contains the headwaters of streams running south towards Lake Ontario and north towards Lake Simcoe and Georgian Bay. To increase the vertical exaggeration for this print from 10x to 25x, we simply rescaled the z dimension in the Cura 3D printing software after loading the STL file.
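For those who would rather script this step than use the Cura interface, the z rescaling is simple to express in code. The sketch below assumes the STL triangles are available as nested lists of (x, y, z) vertices; with the numpy-stl package, for example, similar vertex data can be obtained from a mesh loaded from file:

```python
# Sketch of rescaling the z dimension of a mesh outside of Cura.
# The triangle data structure here is a plain-Python assumption:
# a list of triangles, each a list of three (x, y, z) tuples.

def exaggerate_z(triangles, factor, base=0.0):
    """Scale the z coordinate of each vertex about a base elevation."""
    return [[(x, y, base + (z - base) * factor) for (x, y, z) in tri]
            for tri in triangles]

# One illustrative triangle with z values 0, 1 and 2,
# exaggerated by the 2.5x needed to go from 10x to 25x
tri = [[(0.0, 0.0, 0.0), (1.0, 0.0, 1.0), (0.0, 1.0, 2.0)]]
scaled = exaggerate_z(tri, 2.5)
print([v[2] for v in scaled[0]])  # [0.0, 2.5, 5.0]
```

Note that rescaling about a non-zero base elevation (e.g. lake level) keeps the model base in place while stretching only the relief above it.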

Another Shapefile converter: shp2stl

As far as I have found, the DEMto3D plugin strictly requires true DEM data, so it would not convert a Shapefile with building heights for the Ryerson University campus and surrounding City of Toronto neighbourhoods, which I wanted to print. Feeding a greyscale image of campus building heights into one of the first two approaches above did not work either, as the 3D buildings represented in the resulting STL files had triangulated walls.

In looking for a direct converter from Shapefile geometries to STL, I found Doug McCune’s shp2stl script at https://github.com/dougmccune/shp2stl, along with his extensive examples and explanations in a blog post on “Using shp2stl to Convert Maps to 3D Models”. The script runs within the NodeJS platform, which needs to be installed and understood – a workflow that turned out to be a tad too complicated for this time-strapped Windows user. Although I managed to convert the Ryerson campus using shp2stl, I never printed the resulting model due to another, unrelated challenge: being unable to add a base plate to the model (for my buildings to stand on!).

Getting those walls straight: ArcScene, VRML, and MeshLab

Another surprise find, made just a few days ago, enabled the printing of my first city model from the City of Toronto’s 3D massing (building height) dataset. This approach uses a combination of Esri’s ArcScene and the MeshLab software. Within ArcScene, I could load the 3D massing Shapefile (after clipping/editing it down to an area around campus using QGIS), define vertical extrusion on the basis of the building heights (EleZ variable), and save the 3D scene in the VRML format as a *.wrl (“world”) file. Using MeshLab, the VRML file could then be imported and immediately exported in STL format for printing.

While this is the only approach included in this post that uses a commercial tool, ArcScene, the reader can likely find an alternative workflow based on free/open-source software to extrude Shapefile polygons and turn them into STL, whether or not this requires the intermediate step through the VRML format.

Ryerson Geographers at AAG 2016

Another year has passed, and another annual meeting of the Association of American Geographers (AAG) is about to start in San Francisco this week. The Department of Geography and Environmental Studies at Ryerson is sending its usual strong complement to AAG 2016, although the writer of these lines is sadly staying behind in a cold and rainy Toronto.

Contributions from @RyersonGeo have a traditional focus on Business Geography, with additional abstracts in the areas of urban forestry, population health, migration & settlement, local food, renewable energy, and sustainability science. In approximate chronological order of presentation:

In addition to these contributions, Dr. Hernandez also serves as chair, introducer, organizer, and/or panelist of sessions on

  • BGSG Career Achievement Award: A Conversation with Ken Smith
  • Connecting Practitioners and Students – Advice on Career Development in the Field of Location Intelligence
  • Location Intelligence Trends in the Contemporary Omni-channel Retail Marketplace
  • Retail and Business Geography I & II

Dr. Millward also serves as chair of the session on “Arboriculture and Urban Forestry” and Dr. Steenberg is a panelist in the session entitled “Disrupt Geo 1: new ideas from the front lines of maps, mobile, and big data”.

We wish our colleagues and all participants a productive and enjoyable AAG 2016!

Victoria Fast and Daniel Liadsky receive Ryerson’s top award

Blog post co-authored by Victoria Fast, Daniel Liadsky, and Claus Rinner

Ryerson’s Department of Geography and Environmental Studies is celebrating two gold medal recipients this fall. The Ryerson Gold Medals are the University’s highest honours, presented annually to one graduate of each Faculty. Victoria Fast (PhD in Environmental Applied Science and Management, supervised by Dr. Claus Rinner) received the Gold Medal for the interdisciplinary programs housed at the Yeates School of Graduate Studies, while Daniel Liadsky (MSA in Spatial Analysis, supervised by Dr. Brian Ceh) received the Gold Medal for the Faculty of Arts.

Victoria’s PhD research investigated the potential of novel geographic information techniques to reshape the interaction of government with community organizations and citizens through crowdsourcing and collaborative mapping. The study applied a VGI systems approach (Fast & Rinner 2014) to actively engage with urban food stakeholders, including regional and municipal government, NGOs, community groups, and individual citizens to reveal and map uniquely local and community-driven food system assets in Durham Region. The Durham Food Policy Council and Climate Change Adaptation Task Force are currently using the results to support informed food policy and program development. Victoria’s research contributes to geothink.ca, a SSHRC Partnership Grant on the impact of the participatory Geoweb on government-citizen interactions.

Daniel’s research in the Master of Spatial Analysis (MSA) examined how dietary intake is mediated by individual, social, and environmental factors. The Toronto-based study was stratified by gender and utilized self-reported data from the Canadian Community Health Survey as well as measures of the food environment derived from commercial retail databases. The results uncovered some of the complex interactions between the food environment, gender, ethnocultural background, and socioeconomic restrictions such as low income and limited mobility. In addition and as part of an unrelated investigation, Daniel undertook a feasibility study into a mapping and data analytics service for the non-profit sector.


GIS Day 2015 at Ryerson – A Showcase of Geographic Information System Research and Applications

Ryerson students, faculty, staff, and the local community are invited to explore and celebrate Geographic Information Systems (GIS) research and applications. Keynote presentations will outline the pervasive use of geospatial data analysis and mapping in business, municipal government, and environmental applications. Research posters, software demos, and course projects will further illustrate the benefits of GIS across all sectors of society.

Date: Wednesday, November 18, 2015
Time: 1:00pm-5:00pm
Location: Library Building, 4th Floor, LIB-489 (enter at 350 Victoria Street, proceed to 2nd floor, and take elevators inside the library to 4th floor)

Tentative schedule:

  • 1:00 Soft kick-off, posters & demos
  • 1:25 Welcome
  • 1:30-2:00 Dr. Namrata Shrestha, Senior Landscape Ecologist, Toronto & Region Conservation Authority
  • 2:00-2:30 posters & demos
  • 2:30-3:00 Andrew Lyszkiewicz, Program Manager, Information & Technology Division, City of Toronto
  • 3:00-3:30 posters & demos
  • 3:30-4:00 Matthew Cole, Manager, Business Geomatics, and William Davis, Cartographer and Data Analyst, The Toronto Star
  • 4:00 GIS Day cake!
  • 5:00 End

GIS Day is a global event under the motto “Discovering the World through GIS”. It takes place during National Geographic’s Geography Awareness Week, which in 2015 is themed “Explore! The Power of Maps”, and aligns with the United Nations-supported International Map Year 2015-2016.

Event co-hosted by the Department of Geography & Environmental Studies and the Geospatial Map & Data Centre. Coffee/tea and snacks provided throughout the afternoon. Contact: Dr. Claus Rinner

Geospatial Data Preparation for 3D Printed Geographies

I am collaborating with my colleague Dr. Claire Oswald on a RECODE-funded social innovation project aimed at using “A 3D elevation model of Toronto watersheds to promote citizen science in urban hydrology and water resources”. Our tweets of the first prototypes printed at the Toronto Public Library have garnered quite a bit of interest – here’s how we did it!

[Tweets by Claire Oswald (21 Aug) and Claus Rinner (22 Aug) showing the first prototypes]

The process from geography to 3D print model includes four steps:

  1. collect geospatial data
  2. process and map the data within a geographic information system (GIS)
  3. convert the map to a 3D print format
  4. verify the resulting model in the 3D printer software

So far, we have made two test prints of very different data. One is a digital elevation model (DEM) of the Don River watershed; the other represents population density by Toronto Census tracts. A DEM for Southern Ontario created by the Geological Survey of Canada was downloaded from Natural Resources Canada’s GeoGratis open data site at http://geogratis.gc.ca/. It has a spatial resolution of 30m x 30m grid cells and a vertical accuracy of 3m.

The Don River watershed boundary from the Ontario Ministry of Natural Resources was obtained via the Ontario Council of University Libraries’ geospatial portal, as shown in the following screenshot.

Download of watershed boundary file

The population density data and Census tract boundaries from Statistics Canada were obtained via Ryerson University’s Geospatial Map and Data Centre at http://library.ryerson.ca/gmdc/ (limited to research and teaching purposes).

The Don River watershed DEM print was prepared in the ArcGIS software by clipping the DEM to the Don River watershed boundary selected from the quaternary watershed boundaries. The Don River DEM was visualized in several ways, including the “flat” greyscale map with shades stretched between actual minimum and maximum values, which is needed for conversion to 3D print format, as well as the more illustrative “hillshade” technique with semi-transparent land-use overlay (not further used in our 3D project).
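For readers who prefer to script the “flat” greyscale step, here is a minimal sketch of the min-max stretch; it assumes elevations as a plain list of values (a real DEM would be a raster array processed cell by cell, and GIS packages offer this stretch built in):

```python
def stretch_to_greyscale(values, vmin=None, vmax=None):
    """Linearly stretch elevation values to 0-255 greyscale (higher = lighter)."""
    vmin = min(values) if vmin is None else vmin
    vmax = max(values) if vmax is None else vmax
    span = (vmax - vmin) or 1  # guard against a completely flat surface
    return [round((v - vmin) / span * 255) for v in values]

# Illustrative elevations in metres: lowest maps to black, highest to white
print(stretch_to_greyscale([75.0, 150.0, 225.0]))  # [0, 128, 255]
```

Stretching between the actual minimum and maximum, rather than fixed bounds, uses the full greyscale range and therefore the full printable height of the model.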

[Maps: DEM of Don River watershed; hillshade of the Don River valley at Thorncliffe Park]

The population density print was prepared in the free, open-source QGIS software. A choropleth map with a greyscale symbology was created, so that the lighter shades represented the larger population density values (yes, this is against cartographic design principles, but needed here). A quantile classification with seven manually rounded class breaks was used, with the first class reserved for zero population density values (Census tracts without residential population).
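As a rough sketch of how such class breaks can be derived (the density values below are invented, and QGIS’s own quantile routine may place breaks slightly differently):

```python
def quantile_breaks(values, k=7):
    """Upper class breaks for a k-class quantile classification:
    each class receives (approximately) the same number of observations."""
    v = sorted(values)
    n = len(v)
    return [v[min(n - 1, int(round(n * i / k)) - 1)] for i in range(1, k + 1)]

# Hypothetical population densities (people per sq km) for 14 tracts
densities = [0, 0, 12, 35, 48, 60, 75, 88, 95, 120, 150, 210, 340, 500]
print(quantile_breaks(densities, k=7))  # [0, 35, 60, 88, 120, 210, 500]
```

In practice, one would then round these breaks manually, as described above, and keep the first class for the zero-density tracts.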

[Screenshot: QGIS project with greyscale population density map]

In QGIS’ print composer, the map was completed with a black background, a legend, and a data source statement. The additional elements were kept in dark grey so that they would be only slightly raised over the black/lowest areas in the 3D print.

[Screenshot: QGIS print composer with legend and data source statement]

The key step of converting the greyscale maps from the GIS projects to 3D print-compliant STL file format was performed using a script called “heightmap2stl.jar” created by Markus Fussenegger. The script was downloaded from http://www.thingiverse.com/thing:15276/#files, and used with the help of instructions written by James Dittrich of the University of Oregon, posted at http://adv-geo-research.blogspot.ca/2013/10/converting-dems-to-stl-files-for-3d.html. Here is a sample run with zero base height and a value of 100 for the vertical extent.
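Scripted, that sample run might be assembled as follows; the input file name is hypothetical, and the argument order (image path, model height, base height) is my reading of Dittrich’s instructions:

```python
import shlex

def heightmap2stl_command(image_path, model_height, base_height):
    """Build the command line for the heightmap2stl.jar script.
    Argument order (image, model height, base height) is an assumption
    based on Dittrich's instructions; the PNG file name is hypothetical."""
    return ["java", "-jar", "heightmap2stl.jar",
            image_path, str(model_height), str(base_height)]

# Sample run: vertical extent of 100, zero base height
cmd = heightmap2stl_command("don_dem_greyscale.png", 100, 0)
print(shlex.join(cmd))  # java -jar heightmap2stl.jar don_dem_greyscale.png 100 0
```

The command could then be executed from a terminal or via Python’s subprocess module, provided a Java runtime is installed.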

Command for PNG to STL conversion

The final step of pre-print processing involves loading the STL file into the 3D printer’s proprietary software to prepare the print file and check parameters such as validity of the structure, print resolution, fill options for hollow parts, and overall print duration. At the Toronto Public Library, 3D print sessions are limited to two hours. The following screenshot shows the Don River DEM in the MakerBot Replicator 2 software, corresponding to the printer used in the Library. Note that the model shown was too large to be printed in two hours and had to be reduced below the maximum printer dimensions.

Don River watershed model in 3D printing software

The following photo by Claire Oswald shows how the MakerBot Replicator 2 in the Toronto Reference Library’s digital innovation hub prints layer upon layer of the PLA plastic filament for the DEM surface and the standard hexagonal fill of cavities.

DEM in printing process - photo by C. Oswald

The final products of our initial 3D print experiments have dimensions of approximately 10-20cm. They have made the rounds among curious-to-enthusiastic students and colleagues. We are in the process of improving model quality, developing additional models, and planning for their use in environmental education and public outreach.

The printed Don River watershed model

3D-printed Toronto population density map

Normalization vs. Standardization – Clarification (?) of Key Geospatial Data Processing Terminology using the Example of Toronto Neighbourhood Wellbeing Indicators

In geospatial data processing, the terms “normalization” and “standardization” are used interchangeably by some researchers, practitioners, and software vendors, while others are adamant about the differences in the underlying concepts.

Krista Heinrich, newly minted Master of Spatial Analysis (MSA) and a GIS Analyst at Esri Canada, wrote her MSA major research paper on the impact of variable normalization and standardization on neighbourhood wellbeing scores in Toronto. More specifically, within a SSHRC-funded research project on multi-criteria decision analysis and place-based policy-making, we examined the use of raw-count vs. normalized variables in the City of Toronto’s “Wellbeing Toronto” online tool. And, we explored options to standardize wellbeing indicators across time. Here is what Krista wrote about these issues in a draft of her paper:

In most analysis situations involving multiple data types, raw data exist in a variety of formats and measures, be it monetary values, percentages, or ordered rankings. This in turn presents a problem of comparability and leads to the requirement of standardization. While Böhringer and Jochem (2007) emphasize that there is no finite set of rules for the standardization of variables in a composite index, Andrienko and Andrienko (2006) state that the standardization of values is a requirement.

Several standardization techniques exist, including linear scale transformations, goal standardization, non-linear scale transformations, interval standardization, distance to reference, above and below the mean, z-scores, percentage of annual differences, and cyclical indicators (Dorini et al., 2011; Giovannini, 2008; Nardo et al., 2005; Malczewski, 1999). It should be noted, however, that there is inconsistency among scholars as to the use of terms such as normalization and standardization.

While Giovannini (2008) and Nardo et al. (2005) categorize standardization solely as the use of z-scores, they employ the term normalization to suggest the transformation of multiple variables to a single comparable scale. Additionally, Ebert & Welsch (2004) define standardization as z-score standardization and place this method, along with the conversion of data to a 0 to 1 scale, referred to as ‘ranging’, as the two most prominent processes of normalization. According to Ebert & Welsch (2004), “Normalization is in most cases a linear transformation of the crude data, involving the two elementary operations of translation and expansion.” In contrast, other scholars classify the transformation of raw values to a single standardized range, often 0.0-1.0, as standardization (Young et al., 2010a; Malczewski, 1999; Voogd, 1983), while Dailey (2006), in an article for ArcUser Online, refers to the normalization of data in ArcMap as the process of standardizing a numerator against a denominator field. […]

In this paper, we employed the term standardization to define the classification of raw values into a single standardized scale, in particular through the examination of linear scale transformations and their comparison with z-score standardization. The term normalization is used in this paper to describe the division of variables by either area or population, as referred to by Dailey (2006), therefore regularizing the effect that the number of individuals or the size of an area may have on the raw count values in an area.

In other words, the way we use the two terms, and the way we think they should be used in the context of spatial multi-criteria decision analysis and area-based composite indices: standardization refers to making the values of several variables (indicators, criteria) comparable by transforming them to the same range, e.g. 0-to-1. In contrast, normalization refers to the division of a raw-count variable by a reference variable, to account for different sizes of enumeration areas.
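A toy example may make the distinction concrete; all numbers are invented:

```python
# Normalization (as used in this post): divide a raw count by a reference
# variable. Standardization: rescale comparable values onto a common
# 0-to-1 range via a linear (min-max) scale transformation.

def normalize(count, reference):
    """E.g. population count divided by tract area in sq km."""
    return count / reference

def standardize(values):
    """Linear scale transformation of a set of values onto 0-to-1."""
    vmin, vmax = min(values), max(values)
    return [(v - vmin) / (vmax - vmin) for v in values]

# Normalization: 12,000 people in a 3 sq km census tract
print(normalize(12_000, 3))             # 4000.0 people per sq km
# Standardization: indicator values placed on a common 0-1 scale
print(standardize([20.0, 45.0, 70.0]))  # [0.0, 0.5, 1.0]
```

For a choropleth map, only the normalization step is essential; for a composite wellbeing index, the standardization step is what makes indicators measured in different units comparable.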

Unfortunately, I have to admit that in my cartography course, following the excellent textbook by Slocum et al. (2009), I use the term “standardization” for the important concept of accounting for unit sizes. For example, choropleth maps should only be made for standardized (i.e., normalized!) variables, never for raw-count data (a great rationale for which is provided at http://www.gsd.harvard.edu/gis/manual/normalize/). Furthermore, high-scoring blog posts at http://www.dataminingblog.com/standardization-vs-normalization/ and http://www.benetzkorn.com/2011/11/data-normalization-and-standardization/ define normalization as the rescaling to the 0-to-1 range (our definition of standardization) and standardization as the z-score transformation of a variable. Oops, did I promise clarification of these terms ?-)

In case you are wondering about Krista’s results regarding the Wellbeing Toronto tool: It depends! She discusses an example of a variable where normalization changes the spatial patterns dramatically, while in another example, spatial patterns remain very similar between raw-count and normalized variables. Standardization was used to make wellbeing indicators from 2008 comparable to those from 2011, as we will report at the Association of American Geographers (AAG) annual meeting in April 2014. Our abstract (URL to be added when available) was co-authored by Dr. Duncan MacLellan (Ryerson, Politics and Public Admin department), my co-investigator on the above-mentioned research grant, and Kathryn Barber, a student in Ryerson’s PhD in Policy Studies program.

Awards Season

Regular readers of this blog, if they existed, would have noticed a new static “page” listing various awards, scholarships, and bursaries for students in Cartography, Geography, and GIScience. January/February and the spring seem to have clusters of deadlines for these competitions, in which we will see more Ryerson Geography students participate this year!

Today, Ryerson University officially announced the recipients of the research awards handed to faculty members, and you will find yours truly as one of two awardees from the Faculty of Arts: http://www.ryerson.ca/ryersontoday/data/news/2013/02/src_sawan_awards.html. Ryerson maintains a comprehensive approach to faculty contributions to knowledge, which is labeled as Scholarly, Research and Creative Activity (SRC). This year’s Faculty SRC Awards recognize outstanding achievements by faculty members in the 2011/12 academic year.

I would like to acknowledge my students, who continue to play a significant role in my research program, including those in our BA in Geographic Analysis and Master of Spatial Analysis (MSA) programs. For example, both peer-reviewed journal articles contributing to the above SRC Award were based on MSA students’ major research papers. As always, details on my team’s scholarship can be found on my homepage, http://www.ryerson.ca/~crinner/, and many publications are posted with full text in Ryerson’s institutional repository, http://digitalcommons.ryerson.ca/do/search/?q=author%3ARinner.

News from the Sabbatical Front

Wikipedia tells us that a sabbatical is “a rest from work”. And in our collective agreement, Ryerson University “acknowledges the importance of sabbatical leave to the intellectual vibrancy of the Faculty and therefore of the University.” Indeed, the triad of a professor’s duties in teaching, research, and administrative service is often shifted towards teaching and service, because many research tasks are more flexible to schedule than courses and committee meetings, and therefore tend to be postponed if time is scarce. In stark contrast to the introductory note, a sabbatical is NOT a year off (as some of my non-academic friends are thinking), but a year (or half-year) focused on research with no teaching and service duties.

Having half days or even full days available for writing has been a unique experience in the first two months of my sabbatical. The outcome so far: five journal articles under review, by far the most I have had “out there” simultaneously at any time in my career. Two of these are with Master of Spatial Analysis (MSA) students who completed their major research papers in August/September; one is with a former student in collaboration with Toronto Public Health; one is with a former postdoc in collaboration with the Centre for Addiction and Mental Health; and one is led by a colleague in collaboration with the Injury Prevention Research Office at St. Michael’s Hospital. In addition, I have worked on a manuscript with an MSA grad from two years ago in collaboration with a colleague in Ryerson’s School of Journalism, as well as another manuscript with a former Geographic Analysis student of mine. These are still in progress, and several more manuscripts as well as a book project are lined up for the coming months!

Perhaps the most exciting outcome of the last few weeks, though, is a 250-word abstract submitted tonight for the annual meeting of the Association of American Geographers (AAG) in April 2013. My PhD student Victoria Fast and I are proposing an exciting new perspective on the burgeoning phenomenon of Volunteered Geographic Information (VGI). Basically, we are saying that there is no such thing as VGI! That’s because what researchers call VGI is really just user-contributed data. We argue that information cannot be volunteered; instead, it is a meaningful system output that is generated from volunteered geographic data (VGD) for the purpose of answering a question. We think that this systems perspective on VGI provides a framework for VGI research and will eventually help devise more effective geospatial Web applications.

Visual Analytics for Spatio-Temporal Data

I am starting my sabbatical year with a long overdue participation in the GIScience conference series. GIScience 2012 is taking place at Ohio State University. There was an excellent selection of pre-conference workshops today, of which I attended the one on “GeoVisual Analytics, Time to Focus on Time”, see GeoVA(t) 2012.

I presented research completed last fall by Master of Spatial Analysis student Andrew Lee under my supervision. We used a technique called “Self-Organizing Maps” to visualize changes in the socio-economic status of Toronto neighbourhoods between 1996 and 2006. The presentation garnered a short but intense discussion of the limitations of the SOM technique – something to look at in future research!

Other presentations of interest introduced the “Great Wall of Space-Time”, a wall-like 3D visualization for time series data; interactive temporal zoom & pan tools using multi-touch displays; and another SOM-based cluster analysis for weather data, in which a “Multiple Temporal Unit Problem” was discussed (in analogy to geography’s well-known modifiable areal unit problem). All workshop slides will be made available by the organizers at the above Web site.