In the final lab for the course, the focus was on dasymetric mapping. Dasymetric mapping uses additional information, such as land type, to improve estimates of where population is actually located. To get a better picture of where the population is distributed, impervious surface data was used to screen out areas, such as roads, where people are unlikely to live.
To perform this analysis, I used the Zonal Statistics as Table tool to find the impervious area within each census tract, then joined the resulting table to the census tract data. Next I used the Intersect tool on the census tract and high school layers and added a field to calculate the area of each piece of the new layer. Using the impervious values along with those areas, I derived an adjusted area for each piece, then calculated its new population as the original population multiplied by the new area and divided by the original area.
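The reallocation step boils down to simple areal weighting. Below is a minimal sketch of that calculation, assuming the intersected layer carries the original tract population, the original tract area, and the adjusted area of each piece (the names are illustrative, not the lab's actual field names).

```python
# Minimal sketch of the areal-weighting step used in dasymetric mapping.
# tract_pop and tract_area describe the original census tract; piece_area is
# the (impervious-adjusted) area of the intersected piece.
def reallocate_population(tract_pop, tract_area, piece_area):
    """Estimate the population of one intersected piece by area proportion."""
    if tract_area == 0:
        return 0.0
    return tract_pop * (piece_area / float(tract_area))

# Example: a 2.0 sq km piece of a 10.0 sq km tract holding 5,000 people
print(reallocate_population(5000, 10.0, 2.0))  # 1000.0
```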
The reference population was 54,720 and the estimate was 54,661; about 12% of the population was allocated incorrectly.
Sunday, December 4, 2016
Friday, December 2, 2016
GIS Portfolio
This is my last semester in the GIS Certificate program. During my time in the program, my knowledge of GIS and how it can be applied has grown tremendously. Please click on the link to view my GIS Portfolio.
GIS Portfolio
Along with my portfolio is a brief interview covering my favorite map created as well as how I overcame obstacles.
Interview
Below are a few examples of my work:
Sunday, November 27, 2016
Special Topics: Lab 14
This week's lab covered the topic of spatial data aggregation and looked at how congressional districts are drawn within the United States. The first part of the lab looked at how compact the districts are throughout the country. Compactness is based on the shape of the polygon; oddly shaped polygons are not considered compact.
To determine the top 10 worst offenders for least compactness, I calculated the area and perimeter of each polygon; the odder the shape, the longer the perimeter tends to be relative to the area. As the screenshot below shows, the district has an irregular shape that clearly stretches across a lot of space to pull certain groups into the district.
The other aspect of gerrymandering is community. Ideally, each county is covered by the fewest districts possible; having more than one district cover a county suggests that certain areas of the county could be selected to achieve certain results. I made sure to exclude counties with large populations, which legitimately need multiple districts, and then looked at how many districts fell within each county. Below are the results of the analysis.
Compactness |
Community |
Sunday, November 20, 2016
Special Topics: Lab 13
Week 13's lab focused on how scale and resolution affect the detail of an image. The first part of the lab looked at vector data, while the second part focused on raster data. The first part compared polylines and polygons at different scales. The second part compared LIDAR elevation data with SRTM data at a 90 m cell size; SRTM is a near-global elevation dataset whose 90 m resolution is much coarser than LIDAR.
To compare the two, I had to change the projection of the DEM and resample it to a 90 m cell size. After that process was complete, I used the Slope tool to find the average slope and compared it to the LIDAR slope. I also compared the two images visually. The LIDAR image has a larger slope, and low elevations are more noticeable in the LIDAR image than in the SRTM.
Below are the results of the analysis I performed. The SRTM image shows less change in elevation than the LIDAR, and as the average slopes show, the SRTM has the lower slope.
LIDAR |
SRTM |
Sunday, November 13, 2016
Special Topics: Lab 12
This week's lab focused on comparing the OLS model to the Geographically Weighted Regression (GWR) model. GWR fits the regression over small, local sets of data, using an equation that incorporates the independent and dependent variables; the shape of the results is determined by the bandwidth and kernel type.
This lab looked at housing data to determine the relationship. First the OLS model was used; next, I used the same variables to create the GWR model. After running the models, I looked at the results to see which variable was of greatest interest. Running the analysis with GWR strengthens the relationship: by taking the spatial relationship into consideration, it brings the spatial aspect into play and improves the model. OLS does not look at the spatial relationship; it just looks at the variables, and its results do not take the kernel type or bandwidth into account.
Sunday, November 6, 2016
Special Topics: Lab 11
Week 11 of Special Topics continued the topic of statistics and regression, this time covering how to find the best model performance. The Ordinary Least Squares tool generates results using one or multiple variables, reporting the coefficients, p-values, VIF, and the Jarque-Bera statistic.
To determine whether the selected variables are correct, whether more are needed, or whether some should be removed, there are a few checks: are the independent variables helping, what are the relationships, and are any variables redundant? A few other checks look at whether the model is biased, whether all the needed variables are used, and how well the variables explain the dependent variable.
The lab this week required using the OLS tool along with the Exploratory Regression tool. The Exploratory Regression tool goes through all the various variable combinations and reports whether each model passes. The results also include the Adjusted R-Squared and Akaike's Information Criterion (AIC), which can be used to determine fit and how much variation is explained.
Sunday, October 30, 2016
Special Topics: Lab 10
This week's lab was an introduction to statistics, covering correlations and bivariate regression. Part of the lab consisted of finding missing data in 20 years of rainfall for a rain station. To find the missing data, I used the Regression tool from the Data Analysis ToolPak. Once I had the regression summary, I found the slope and intercept values and used them in the formula y = m*x + b, where the x variable was the data I had for rain station B.
In y = m*x + b, the intercept b is the value of y when x is 0, and the slope m tells us how much y goes up or down for each unit change in x. The assumption is that if I have x, I can figure out y. The regression analysis looks at the years where both station A and station B have data and finds the relationship between the stations.
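As a rough sketch of the fill-in step (with made-up rainfall values, not the station data from the lab), the slope and intercept can be fit in Python and then applied to a year where only station B has a record:

```python
import numpy as np

# Hypothetical overlapping years where both stations reported rainfall
station_b = np.array([40.0, 36.5, 52.8, 45.1, 48.6])   # complete station (x)
station_a = np.array([42.1, 38.7, 55.3, 47.9, 50.2])   # station with gaps (y)

# Fit y = m*x + b on the overlapping years
m, b = np.polyfit(station_b, station_a, 1)

# Estimate station A rainfall for a year where only station B was recorded
x_missing = 44.3
estimate = m * x_missing + b
print("slope=%.3f intercept=%.3f estimate=%.1f" % (m, b, estimate))
```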
Sunday, October 23, 2016
Special Topics: Lab 9
This week's lab covered the vertical accuracy of a DEM. I overlaid survey points on the LIDAR layer and used the Extract Values tool. Once the values were extracted, I converted them from feet to meters and calculated the difference between the LIDAR values and the field points. I then squared the differences, summed them, took the average, and took the square root to find the Root Mean Square Error (RMSE).
Once the RMSE was calculated, I multiplied it by 1.96 to determine the 95th percentile accuracy and by 1.69 to find the 68th percentile accuracy; the lower the number, the more accurate the dataset. To determine whether there was a bias, I found the Mean Error by taking the average of the unsquared differences. Below are the results, which show which category was most accurate and that the urban area had the most bias.
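A compact sketch of those calculations (with illustrative elevation differences rather than the lab's values, and using the same 1.96 and 1.69 multipliers described above) looks like this:

```python
import math

# dz = (LIDAR elevation - checkpoint elevation) in meters; values are illustrative
dz = [0.12, -0.08, 0.20, -0.15, 0.05, 0.11, -0.02, 0.09]

rmse = math.sqrt(sum(d ** 2 for d in dz) / len(dz))   # Root Mean Square Error
accuracy_95 = rmse * 1.96                             # 95th percentile accuracy
accuracy_68 = rmse * 1.69                             # 68th percentile accuracy
mean_error = sum(dz) / len(dz)                        # non-zero value suggests bias

print(rmse, accuracy_95, accuracy_68, mean_error)
```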
Accuracy Results |
Sunday, October 16, 2016
Special Topics: Lab 8
This week's lab covered interpolation, using water quality in Tampa Bay as the example. Four different techniques were used to display Biochemical Oxygen Demand (BOD). The first technique used non-spatial analysis. The second was Thiessen interpolation, which used the Create Thiessen Polygons tool with the BOD points as the input features and all fields in the output; a mask was applied so only the Thiessen polygons covering Tampa Bay are shown. Each polygon takes its value from the point it contains.
The third technique was Inverse Distance Weighted (IDW) interpolation, performed with the IDW tool, for which I adjusted the search radius and power. The last technique was spline; the Spline tool was used for both the regularized and tension options. Points that were very close to each other caused unrealistically high concentrations, so those points needed to be removed or averaged.
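To show what IDW is doing under the hood, here is a small standalone sketch of the estimator (the sample values and coordinates are made up, and the ArcGIS IDW tool offers many options beyond the power and search radius shown here):

```python
import math

def idw(points, x, y, power=2.0, radius=None):
    """Inverse-distance-weighted estimate at (x, y) from (px, py, value) samples."""
    num, den = 0.0, 0.0
    for px, py, value in points:
        d = math.hypot(px - x, py - y)
        if d == 0:
            return value                      # exactly on a sample point
        if radius is not None and d > radius:
            continue                          # outside the search radius
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den if den else None

# Hypothetical BOD samples as (x, y, mg/L)
samples = [(0, 0, 2.1), (100, 0, 3.4), (0, 100, 1.8), (100, 100, 2.9)]
print(idw(samples, 50, 50, power=2.0))
```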
The results of the analysis are fairly similar for each technique except the regularized spline. Below is an example of the tension spline.
Tension Spline |
Sunday, October 9, 2016
Special Topics: Lab 7
The lab for this week compared TIN models with DEM models. The TIN shows the terrain and lake with fairly rigid, angular edges, while the DEM shows the changes in elevation with far smoother contour lines. The TIN didn't represent the terrain well at first, so I adjusted it using the Edit TIN tool.
It is interesting that DEMs and TINs can both be created from elevation points. DEMs are represented by rasters, while TINs are represented by vectors. Since DEMs use pixels, the changes in elevation appear smooth. Creating a DEM with slope, aspect, and elevation was also quite different from working with triangulated elevation points.
Modified TIN |
Sunday, October 2, 2016
Special Topics: Lab 6
This week's lab covered location-allocation. To perform the analysis, I added the distribution centers as facilities and the customers as demand points, set the analysis settings, and used straight-line output. There was no impedance cutoff, and all of the facilities were used. After solving, not all of the customers appeared to be assigned to the closest center, so I reassigned the market areas by performing a spatial join of the market areas and the customers. I then performed table joins to figure out how many customers go to each facility, and the Summary Statistics tool counted the customers in each market area.
Next I created a new feature class by joining the unassigned market areas to the new table, which reassigned the market areas.
The weakness is that the allocation did not always send distribution centers to the closest customers. The strength is the ability to analyze all of the settings and inputs very quickly to produce market areas.
Location Allocation |
Sunday, September 25, 2016
Special Topics: Lab 5
Lab 5 covered the Vehicle Routing Problem and how adjusting the settings and routes affects how many stops and routes are produced. To start the analysis, I added the customer information as the orders and adjusted the parameters for pickup times and other settings. Next I added the distribution center as the depot and set the parameters for when the depot can be visited.
Next I added the routes and loaded the truck information. For the route properties, I adjusted when each route can start, the cost per mile and cost per minute, the assignment rules, and the maximum capacity. I added route zones and set the zone parameter to True so routes stay in the correct zone. U-turns were also not allowed for the routes.
Finally, I ran the solver and 6 orders were not reached. To fix this, I changed the assignment rule for Trucks 15 and 16 to "Include". Previously they were set to "Exclude", which forced the routes to pick up only the orders assigned to that truck.
After adjusting the routes, below is a screenshot of the new routes, which service all orders. Only one route exceeds the time limit. Customer service will improve since every order is taken care of and there is only one time violation.
Improved Routes |
Sunday, September 18, 2016
Special Topics: Lab 4
This week's lab looked at creating new network datasets and adjusting the analysis settings. I created a network dataset that used streets as the participating feature class. I chose to model turns but did not use restricted turns, used elevation fields, and did not enable traffic modeling. Once all the settings were adjusted, I built the network dataset.
In ArcMap, I added the newly created network dataset, enabled network analysis, and created a new route. To create the route, I loaded the facilities as the stops. The impedance was set to minutes, and the stops could be reordered to find the fastest route; however, the first and last stops had to remain the same. The only restriction was one-way streets.
Next, I added the restricted turns to the turn settings and rebuilt the network dataset. In ArcMap, I added the streets and restricted turns layers, used the same network analysis settings as before, and re-solved the route. With restricted turns added, the route has to adjust slightly to find a new path.
The final route analysis required building a new network dataset, this time with traffic modeling. I made sure to adjust all of the traffic settings; the traffic data includes speeds and traffic levels, as well as free-flow speeds. The network analysis settings were the same as for the other two routes. With traffic information added, the route adjusts to traffic speeds, which were not included in the travel time before.
Restricted Turns |
Traffic Data |
Sunday, September 11, 2016
Special Topics: Lab 3
This week's lab focused on analyzing the completeness of road networks. I compared how complete the street centerlines and TIGER roads are for Jackson County. Below is how I performed this analysis.
I needed to calculate the length of the roads in both shapefiles. I did this by adding a field and using Calculate Geometry, which calculated the length of each road segment in kilometers. Next, I needed to find the roads that fall within the grid shapefile, so I used the Intersect tool to create a new shapefile containing all the road segments that intersect the grid cells. I ran the tool on both the street centerlines and the TIGER roads.
Once I had the new shapefiles of roads within the grids, I recalculated the road lengths with Calculate Geometry, exported each attribute table with the new lengths, and found the difference in length as well as the percentage difference. Below is a table.
The map below shows the absolute difference in road completeness between the two datasets.
Road Completeness Analysis |
Sunday, September 4, 2016
Special Topics: Lab 2
Lab 2 covered the National Standard for Spatial Data Accuracy (NSSDA). The lab looked at junction points for two street datasets. To perform this lab, I created two new network datasets from the street shapefiles, then selected over 85 junction points from the city street shapefile and over 85 from the USA streets shapefile.
After selecting the points, I exported them into new shapefiles, then added orthophotos to see where the true intersections are located. Zooming in on each pair of test points, I found the true location and added a reference point; I did this for all points. A unique identifier was applied to all three datasets, and I used the Add XY Coordinates tool to add coordinates to the attribute tables.
I exported the attribute tables for the three shapefiles and inserted the data into the horizontal accuracy spreadsheet. The spreadsheet found the differences between the X and Y coordinates, squared and summed them, and calculated the Root Mean Square Error, which was multiplied by 1.7308 for the accuracy statistic. This was done for both the city data points and the USA street data points.
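The spreadsheet math reduces to a few lines. Here is a sketch with made-up coordinate differences (the 1.7308 factor is the NSSDA horizontal accuracy multiplier used in the lab):

```python
import math

# (test_x - reference_x, test_y - reference_y) in feet for each test point;
# the offsets below are illustrative, not the lab's measurements
offsets = [(3.2, -1.5), (-2.8, 4.1), (1.9, 2.2), (-4.0, -0.7)]

sum_sq = sum(dx ** 2 + dy ** 2 for dx, dy in offsets)
rmse = math.sqrt(sum_sq / len(offsets))
nssda_horizontal = rmse * 1.7308   # horizontal accuracy at 95% confidence

print("RMSE = %.2f ft, NSSDA accuracy = %.2f ft" % (rmse, nssda_horizontal))
```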
City:
Positional Accuracy: Using the National Standard for Spatial Data Accuracy, the dataset tested 30.64 feet horizontal accuracy at the 95% confidence level.
USA Street:
Positional Accuracy: Using the National Standard for Spatial Data Accuracy, the dataset tested 62.67 feet horizontal accuracy at the 95% confidence level.
Test Points |
Sunday, August 28, 2016
Special Topics Lab 1
This week's lab covered precision and accuracy. The first part looked at the precision and accuracy of a GPS unit: a point was observed and mapped 50 times using the GPS device. To determine how accurate and precise the observations were, the average X and Y coordinates were found and a point was added at that location. I then created buffers of 1, 2, and 5 meters around the average point.
I did a spatial join of the average point with the 50 observation points to find the distance from each point to the average point. I then found the distances containing 50%, 68%, and 95% of the points. I also determined the average elevation and found the absolute difference between the average point and the observed points. Below are the results, along with a map showing the average point and the observed points.
The reference point marking the actual location was added to the map. To see how accurate the average point was, I measured the distance from the reference point to the average point, and did the same for elevation. The average point was within a few meters of the reference point horizontally, and the elevation was within 6 meters.
The reference point and the average point differ by quite a bit. The horizontal position is off by 3.8 meters from the reference point to the average point, while the horizontal precision was 4.4 meters, so the precision estimate was greater than the true difference. The elevation is off by 6 meters, while the vertical precision showed 3 meters. Even though roughly 4 meters is not a lot, depending on why the location is needed it can be huge. GPS units can only be so accurate, and the unit put the point fairly close to the true position.
The horizontal accuracy was 3.8 meters. This is better than the horizontal precision. The vertical accuracy is 6 meters which is worse than the vertical precision of 3 meters. There was no evidence of bias in the results.
The second part of the lab covered calculating the Root Mean Square Error, mean, median, percentiles, and minimum and maximum values for a set of positional errors. I then used the XY errors to plot a CDF chart and compared the chart to the metrics I calculated. From the chart, it is easy to read off certain metrics such as the percentiles or the minimum and maximum values.
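A quick sketch of how the percentiles and CDF could be produced in Python (with random stand-in errors rather than the lab's data):

```python
import numpy as np
import matplotlib.pyplot as plt

# Horizontal error of each observation, in meters (random stand-in values)
errors = np.sort(np.random.rand(50) * 8.0)
cum_prob = np.arange(1, len(errors) + 1) / float(len(errors))

print("50th percentile:", np.percentile(errors, 50))
print("68th percentile:", np.percentile(errors, 68))
print("95th percentile:", np.percentile(errors, 95))

plt.step(errors, cum_prob, where="post")   # empirical CDF
plt.xlabel("Horizontal error (m)")
plt.ylabel("Cumulative probability")
plt.title("CDF of positional error")
plt.show()
```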
Monday, August 1, 2016
Programming: Module 11
This week's lab covered how to share tools. First I looked at the script that was provided for the lab, then at the properties of the script tool. I adjusted the script itself to use sys.argv[] for the file path, which let me reference the parameter numbers set in the tool's properties.
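A minimal sketch of what that change looks like (the parameter order and variable names are illustrative; they must match the order defined on the tool's Parameters tab):

```python
import sys
import arcpy

# sys.argv[0] is the script path itself; tool parameters start at index 1
input_fc = sys.argv[1]        # e.g. the input feature class parameter
output_folder = sys.argv[2]   # e.g. the output folder parameter

arcpy.AddMessage("Input: " + input_fc)
arcpy.AddMessage("Output folder: " + output_folder)
```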
Before sharing the tool, I updated the item description for the tool and filled in the dialog help for all the parameters. I then right-clicked the script tool and imported the script, and right-clicked again to set a password. This embeds and protects the script so the tool can be shared.
This class definitely took me out of my comfort zone. Python was very new to me, and I was very eager to learn. I am a very literal person, and figuring out how to adjust SearchCursor calls and for loops was very challenging for me. However, each assignment gave me that much more satisfaction once I was able to figure it out.
Module 7 required the use of the Search Cursor, Update Cursor, and Insert Cursor, and it seemed to be the most difficult lab. Learning how to use each cursor and adjust it for the assignment was a difficult task, but after figuring it out, I was better able to see how powerful script writing can be. It made me more determined to really understand what each line of code is doing.
Being able to use code to access attribute tables and create dictionaries is something I can apply to my job. I work with a lot of data and being able to grab certain data quickly or update it using scripts would be great.
Parameters |
Map Results |
Dialog Box |
Tuesday, July 26, 2016
Programming: Module 10
This week's lab required me to create a toolbox and script tool to share with another user. Standalone scripts are useful, but creating a script tool has even more benefits: script tools are easy to share, the user doesn't need to know Python, and they include a dialog box.
The screenshot below shows the script tool window created in this lab. To build it, I added a toolbox to my Module 10 folder, then added a script to the toolbox and made sure "Store relative path names" was checked. I selected an already created script for the Script File. Next, I added four parameters to the script tool, adjusted their data types and properties, and set the input and output file locations. When I open the tool, the window below opens.
The next step in the lab was to adjust the parameters in the standalone script. I replaced the filenames and file paths with arcpy.GetParameter(); the parameter indexes correspond to the order I added them in the script properties. In order to run the script without an error, I had to wrap the output folder in str(). Then I ran the tool with the clip boundary of Durango.shp and the four selected input features. To print statements in the dialog box, I adjusted the standalone script again, changing the print statements to arcpy.AddMessage(). Results are below.
To share the script, I compressed the toolbox and the standalone script.
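As a hedged sketch of what the adjusted standalone script might look like (using arcpy.GetParameterAsText(), the text-returning variant of the function mentioned above, with illustrative variable names rather than the lab's exact code):

```python
import arcpy

# Parameter indexes follow the order defined on the script tool's Parameters tab
input_features = arcpy.GetParameterAsText(0).split(";")  # multivalue input list
clip_boundary = arcpy.GetParameterAsText(1)              # e.g. Durango.shp
output_folder = arcpy.GetParameterAsText(2)              # destination folder

for fc in input_features:
    out_fc = output_folder + "\\" + arcpy.Describe(fc).baseName + "_clip.shp"
    arcpy.Clip_analysis(fc, clip_boundary, out_fc)
    arcpy.AddMessage("Clipped " + fc + " to " + out_fc)   # shows in the dialog box
```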
Tool Options |
Dialog Box |
Flowchart |
Wednesday, July 20, 2016
Programming: Module 9
Raster Result |
The goal of this week's script was to identify forested areas with a slope between 5 and 20 degrees and an aspect of 150 to 270 degrees.
I imported arcpy, set the environment, and imported the tools from the ArcPy Spatial Analyst module. The script checks that the Spatial Analyst extension is available and checks it out, and I made sure the overwrite setting was enabled. The script reclassified the forest land cover so it would have a value of 1, then found the slope of the elevation raster.
Then the Aspect tool is used to find the aspect of the elevation raster. The script finds the cells with a slope greater than 5 and less than 20 degrees, as well as the cells with an aspect of 150 to 270 degrees.
Finally, the script combines the slope and aspect requirements. If Spatial Analyst were not available, the script would print that the license is not available.
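Below is a hedged sketch of that workflow. The raster names, workspace path, and the forest land-cover codes (41-43) are assumptions, not the lab's actual data:

```python
import arcpy
from arcpy.sa import *

arcpy.env.workspace = r"C:\GIS\Module9"   # assumed path
arcpy.env.overwriteOutput = True

if arcpy.CheckExtension("Spatial") == "Available":
    arcpy.CheckOutExtension("Spatial")

    # Reclassify the forest land-cover classes to a value of 1 (codes assumed)
    forest = Reclassify("landcover", "VALUE",
                        RemapValue([[41, 1], [42, 1], [43, 1]]), "NODATA")

    slope = Slope("elevation", "DEGREE")
    aspect = Aspect("elevation")

    good_slope = (slope > 5) & (slope < 20)          # slope between 5 and 20 degrees
    good_aspect = (aspect >= 150) & (aspect <= 270)  # aspect from 150 to 270 degrees

    suitable = forest * good_slope * good_aspect     # combine all requirements
    suitable.save("suitable")

    arcpy.CheckInExtension("Spatial")
else:
    print("Spatial Analyst license is not available.")
```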
Below is a flow chart of the script and also a screenshot of the result of the script.
Lab 8: Applications in GIS
This week's lab focused on damage assessment, mainly using Hurricane Sandy. The map created at the end of the lab shows the path the hurricane took and what category it was at each point in time. To create this map, I added the world and US shapefiles to ArcMap. I needed only the states affected by the hurricane, so I used Select By Attributes to select those states. I added the hurricane points to the map as XY data, and to create the hurricane's path I used the Points To Line tool.
Next, I needed to adjust the hurricane symbology to look like a hurricane. To do this I edited the symbol's properties and changed the symbol set to ESRI Meteorological. I found the symbol that looked like a hurricane, used the angle setting to tilt it, added a center dot on top of the symbol, and changed its color to red. I saved the new symbol and categorized the layer by unique values.
The next step was to add graticules to the map. In the data frame properties I went to the Grids tab and selected the graticule that uses meridians and parallels. Finally, I just had to add the key map elements.
To perform a damage assessment on the New Jersey shoreline, I added a new feature class and placed a point on each parcel, then updated the point attributes; I did this for every parcel in the layer. To determine how many structures fell within 100, 200, or 300 meters of the coastline, I used Select By Location and also created a buffer for each distance. To create the coastline, I digitized a new polyline feature class parallel to the parcel area.
Below is the table of the results:
Structural damage: counts of structures within each distance from the coastline

Damage category | 0-100 m | 100-200 m | 200-300 m
No Damage | 0 | 0 | 1
Affected | 0 | 0 | 7
Minor Damage | 0 | 14 | 24
Major Damage | 3 | 19 | 11
Destroyed | 9 | 10 | 7
Total | 12 | 43 | 50
Wednesday, July 13, 2016
Programming: Module 8
Module 8's lab covered working with geometry objects and multipart features. The script I wrote creates a text file and writes the coordinates, Object IDs, and names for the vertices in the rivers shapefile. To do this, I imported arcpy, set the environment, and used a Search Cursor to get the OID, the Shape geometry object, and the Name field.
I used the open() function to create the text file and named it rivers_Mfelde. To get the coordinates, names, and Object IDs into the text file, I created a for loop to iterate over all the rows, and then a nested for loop to iterate over each point in the array, which I accessed with the .getPart() method. I added 1 to the vertex ID on each point to keep track of the vertices.
To write the data to the text file, I used the .write() method, writing the OID from the outer loop, the vertex ID, the X coordinate, the Y coordinate, and the name of the river. To ensure I captured everything correctly, I also printed the results in the interactive window, and then I closed the text file.
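A sketch of that loop is shown below. The shapefile name, field names, and output filename follow the lab description where possible but should be treated as assumptions, and the nested loop iterates the geometry directly rather than calling .getPart() explicitly:

```python
import arcpy

arcpy.env.workspace = r"C:\GIS\Module8"   # assumed workspace

with open("rivers_Mfelde.txt", "w") as f:
    # OID@, the geometry object, and the river name field (field name assumed)
    with arcpy.da.SearchCursor("rivers.shp", ["OID@", "SHAPE@", "NAME"]) as cursor:
        for oid, shape, name in cursor:
            vertex_id = 0
            for part in shape:                 # each part is an array of points
                for point in part:
                    vertex_id += 1             # keep track of the vertices
                    line = "{0} {1} {2} {3} {4}\n".format(
                        oid, vertex_id, point.X, point.Y, name)
                    f.write(line)
                    print(line.strip())        # echo to the interactive window
```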
Below is the screenshot of my text file along with a flow chart that shows the process of creating the script.
Text File Results |
Lab 7: Applications in GIS
This week's lab focused on coastal flooding. With concerns about sea level rise and storm-driven flooding, coastal flooding analysis is important to decision makers. The map below shows the effects of flooding on Honolulu, HI.
To perform the analysis, I created the flood zones by using the Less Than tool to find areas below 1.41 and 2.33 meters. To find the area of the flood zones, I converted the rasters to polygons and used Calculate Geometry to compute the areas.
I multiplied the DEM by the flood zones and then used the Minus tool to get the flood depth. Next I added the census tract shapefile, added a field to calculate the area in square kilometers, and calculated the population density by dividing the population by the area.
People over 65 are less likely to be affected by the floods compared to the other population groups. In the 3-foot scenario, the white population makes up the greatest percentage of those affected by flooding. In the 6-foot scenario, homeowners are most likely to be affected. In either scenario, people 65 and older are less likely to be affected. Homeowners do have a high social vulnerability among the three groups here; in the 3-foot scenario, they are the second highest percentage.
Flood Depth Analysis |
Monday, July 11, 2016
Peer Review #2
GIS Modeling of Intertidal Wetland Exposure Characteristics discusses the analysis of solar radiation and tidal inundation impacts on coastal ecosystems. This analysis addressed whether solar radiation and atmospheric exposure can be modeled using LIDAR derived DEM data with wetland mapping. The authors stated previous methods had limitations due to data quality. Once the data was modeled, it would provide exposure characteristics of Nova Scotia, Canada. Four methods of analysis were used, two using Python scripts.
Early in the article, the authors, Crowell, Webster, and O'Driscoll, discuss the limitations of analyzing solar radiation and tidal inundation. Poor data samples and quality make the analysis difficult, and the authors do a fair job of showing how GIS analysis along with Python can simplify and improve the desired analysis, which they stated extends the localized findings.
When the authors explained how they performed the tidal inundation model analysis, it was clearly explained that it uses a predictive approach, with a script that found the areas at high risk of flood damage. The authors clearly stated that the cell elevations in the LIDAR DEM raster are used to find the connectivity between adjacent cells. One limitation is that the authors do not explain how this improves upon previous analysis; it would also have been good to know why they did not account for conservation of momentum or flow rate. One strong argument the authors made for using a script was that it allowed for realistic modeling of the tides.
The authors do a good job showing how one script was able to be used alongside another script. The tidal inundation model was used with the solar exposure model. However, it was not clear what the parameters were for performing this script. The data used was from 2009. The authors did not make it clear if using older data would have an impact on the analysis. The authors could have gone into more detail of how the two scripts worked together or how the analysis was performed together.
The article does a good job explaining how the coastal wetland zone script looped through each tidal model delineation to determine the spatial overlap. The authors do a great job stating how the script looked at the lowest and highest elevation and needed to use annual atmospheric and solar-exposure characteristics.
One strength of the article is the authors explained how the script can be applied to other parameters/characteristics such as other chemicals that can impact the areas. The authors did a great job supporting this claim by giving an example of other contaminants. Another strength of the article was mentioning some limitations of the analysis. By stating irregular tides were not captured in the script, it allows the audience to make note and understand why scripts are not perfect even though being fairly accurate.
Overall, this article does a decent job showing why Python scripting can benefit the environmental analysis of solar exposure and tidal patterns. The authors made a great point that using models to fill in data gaps allows these kinds of findings to be extended. However, the authors could have provided more details on how the scripts worked or explained how the analysis would be performed if done manually, and it was not completely clear what data parameters were used. Still, after reading the article, it is clear that using scripting alongside other analysis methods reduces analysis time and allows for far more complex analysis.
Crowell, N., O’Driscoll, N.J., & Webster, T. (2011). GIS Modelling of Intertidal Wetland Exposure Characteristics. Coastal Education & Research Foundation, 44-51.
Wednesday, June 29, 2016
Lab 6: Applications in GIS
This week's lab focused on crime analysis. The three types of analysis used to determine crime hotspots in Albuquerque were grid-based thematic mapping, Kernel Density, and Local Moran's I. After performing all three analyses, the next step was to compare which hotspot analysis is best for predicting future crime.
To create the map below, I first performed the grid-based thematic mapping. I did a spatial join of the grids with the 2007 burglaries, selected the grids that contained at least one burglary, and exported them to a new shapefile. I then found the top 20% of grids, dissolved those polygons into a single polygon, and added a field to calculate the area in square kilometers.
Next I used Kernel Density to determine the hotspots. I set the environment to the grid area and ran the Kernel Density tool with an output cell size of 100 and a search radius of 1,320 feet, keeping the area units in square miles. I then removed the areas with zero density, used the mean to set the classifications, and converted the raster to polygons.
Finally, I used Local Moran's I to determine hotspots. I did a spatial join of the block groups and the 2007 burglaries, then calculated the crime rate of burglaries relative to housing units. Next I ran the Cluster and Outlier Analysis tool with the default parameters, used a query to create a shapefile of just the HH (high-high) polygons, dissolved the polygons, and found the area using Calculate Geometry.
Below is a map layout of all three analyses. This helps the Albuquerque police determine where to patrol by comparing the 2007 hotspots to the 2008 burglaries.
Hotspot Analysis |
Programming: Module 7
This week's lab covered checking for data and working with lists and dictionaries. Another large component of the lab involved Search Cursors, Update Cursors, and Insert Cursors. The end result of the lab took data meeting specific criteria from a shapefile and loaded it into a dictionary.
Below are the results of the script. The results show each step and whether the process completed. The dictionary covers the cities whose Feature value is county seat. Using the cursor, I populated the dictionary with keys and values: the keys are the city names and the values are the populations from 2000.
Figuring out how to update the dictionary with the keys and values was the most challenging part. I was not quite sure how to manipulate the cursor so it would pull the city name and the population. After seeing an example of how each row's fields are iterated, I was able to finish the script.
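A read-only version of that idea can be sketched with a Search Cursor (the shapefile and the FEATURE, NAME, and POP_2000 field names are assumptions, not the lab's exact data):

```python
import arcpy

cities = {}
fields = ["NAME", "POP_2000"]
where = "FEATURE = 'County Seat'"   # only the county-seat cities

with arcpy.da.SearchCursor("cities.shp", fields, where) as cursor:
    for name, pop2000 in cursor:
        cities[name] = pop2000      # key = city name, value = 2000 population

print(cities)
```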
Wednesday, June 22, 2016
Programming: Module 6
This week's lab focused on using ArcPy functions to perform geoprocessing. Part of the lab covered writing script in ArcMap and then in PythonWin.
The screenshot below shows the results of the script I created. The goal of the script was to create a buffer around the hospitals and dissolve the overlapping buffers into larger polygons. In order to do this, I needed to import ArcPy and set the environments; then I was able to start using the geoprocessing tools.
The first tool added XY coordinates to the hospital shapefile, and I printed the tool's results to ensure that it worked. Next I used the Buffer tool to create a 1000 meter buffer around the hospitals, leaving all the optional parameters blank and printing the results. Finally, I used the Dissolve tool to join the polygons that overlapped, and once it finished I printed its results as well.
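A hedged sketch of that script (the workspace path, shapefile names, and output names are assumptions):

```python
import arcpy

arcpy.env.workspace = r"C:\GIS\Module6"   # assumed workspace
arcpy.env.overwriteOutput = True

# Add XY coordinates to the hospital points and report the tool messages
arcpy.AddXY_management("hospitals.shp")
print(arcpy.GetMessages())

# 1000 meter buffer around each hospital, optional parameters left at defaults
arcpy.Buffer_analysis("hospitals.shp", "hospital_buffer.shp", "1000 Meters")
print(arcpy.GetMessages())

# Dissolve the buffers so overlapping polygons merge together
arcpy.Dissolve_management("hospital_buffer.shp", "hospital_dissolve.shp")
print(arcpy.GetMessages())
```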
Results of Geoprocessing Script |
Below is a flowchart to show how the script runs.
Script Flowchart |
Lab 5: Applications in GIS
This week's lab covered spatial accessibility modeling. To perform the analysis, the Network Analyst extension needed to be enabled. A few of the network analysis tools focused on this week were Closest Facility, New Service Area, and spatial joins.
To create the map below, I used the New Service Area option from the network analysis toolbar. I found the service area for each of the 7 college campuses. The tolerance was 5000 meters and it used breaks of 5, 10, and 15 minutes. After clicking the Solve button, I was able to see the results of the service areas.
Next, I created a new shapefile that excluded the Cypress Creek campus and reran the New Service Area analysis for the remaining 6 campuses, using the same settings as for the 7 campuses. I converted the block group shapefile to centroids using the Feature To Point tool, which let me see how the service areas overlay the block groups.
The results show how the service area decreases if the Cypress Creek campus closes.
Campus Service Areas |
Tuesday, June 14, 2016
Lab 4: Applications in GIS
This week's lab covered visibility analysis, which can be applied to many different situations such as observation points or fire towers. The lab focused on viewshed analysis and observer point tools, and part of it covered visibility analysis using 3D Analyst and LAS dataset tools.
The first part of the lab used the Viewshed tool with the summit points and the elevation raster. I also used the Observer Point Tool with the same inputs. Once I had the outputs, I used Extract Values to Points to determine which summit is viewed by the most observation points.
The next part of the lab used polylines to determine which areas of Yellowstone National Park are visible from the roads. The inputs used the roads polyline shapefile and the elevation raster.
Part three of the analysis used the 3D Analyst extension, which allowed me to see a 3D model of the city of Boston. Once the street view was selected, I was able to rotate the model to see all angles. Next, I used the LAS Dataset to Raster tool to create a new raster of the finish line area. I added the camera shapefile and used the Viewshed tool to see how much area is visible from the one camera, then adjusted the offset so the camera was treated as elevated and could see around the buildings.
It was important to then determine the start and end angle. This allowed for a more realistic visible area. I added two more cameras and performed the same analysis. After I had the viewshed, I adjusted the symbology to show what area is seen by 1 to 3 cameras.
The last part of the lab covered line-of-sight analysis. For this analysis, I used the Create Line of Sight tool to create a line connecting two summits. To see more detail, I opened the Profile Graph; the blue dot shows an obstruction. I also used the Construct Sight Lines tool to create lines between all the towers, which showed which summits are visible from each summit.
Polyline Visibility Analysis |
Programming: Module 5
This week's lab focused on creating a model and exporting it as a script. The model was created in ArcMap using shapefiles and geoprocessing tools.
When models are exported to a script, not all of the data sources are populated, so it is important to adjust the parameters and connect the data to the correct folders. To create the model used in the script, I added a new model to the toolbox, then dragged the soil and basin shapefiles into the model. I wanted to clip the soil layer with the basin layer, so I dragged the Clip tool into the model. To remove soils that aren't ideal for farming, I added the Select tool, and once the area was selected I added the Erase tool.
During the whole model-building process, I had to make sure that all the parameters were set. Another key piece was enabling outputs to be overwritten. The model was then exported as a script, and scripts can also be added to the toolbox to be run in ArcMap.
Shapefile Output of Ideal Soils |
Below is a flowchart that represents the process of the script that was created to clip and erase certain soils from the basin layer.
Wednesday, June 8, 2016
Programming: Module 4
This week's lab focused on debugging scripts in PythonWin. In the first script, one way to find errors was to hit the Check button to see if there were any syntax errors. If there were no syntax errors, I ran the script, which reports an error and the line it occurs on. After fixing the error, I ran it again; if it resulted in another error, I looked at the error type and the line it fell on. The error-free script prints the fields in the parks shapefile.
The second script contained eight errors. To find them, I first used the Check button; the syntax was fine, so I had to run the script to see what errors occurred. Each time I found an error, I corrected it and ran the script again. The errors included a bad backslash in a path, the wrong file format, misspelled or missing words, and an incorrect data source. The finished script prints the names of all the layers in the data frame.
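The backslash error is worth a quick illustration; the path below is hypothetical, but it shows why Windows paths need a raw string, doubled backslashes, or forward slashes.

```python
# The path here is made up for illustration: Python treats "\t" as an escape
# sequence, so a plain backslashed Windows path can silently change.
bad_path = "C:\temp\data"       # "\t" becomes a tab character
good_path = r"C:\temp\data"     # raw string keeps the backslashes as typed
also_good = "C:/temp/data"      # forward slashes also work in arcpy paths
```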
The third script also contained errors. For this script, I added a try-except statement to print the errors from Part A while letting Part B run successfully. To do this, I found the errors by running the script; once I located an error, I placed the try statement before the line that caused it and ran the script again to find the next one. I used a general exception to print the error messages, and Part B then ran successfully. The script prints the errors from Part A along with the spatial reference and map scale.
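The structure I used looks roughly like this minimal sketch, where a division by zero stands in for the lab's intentional errors.

```python
# A minimal sketch of the try-except pattern: a generic Exception prints the
# error message from Part A and lets Part B run regardless.
try:
    # Part A: statements that may raise an error (placeholder example)
    result = 1 / 0
    print(result)
except Exception as e:
    print("Error in Part A: " + str(e))

# Part B runs no matter what happened above
print("Part B completed")
```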
Layer Details |
Layers in the Data Frame |
Try-Except Statement |
The flow chart illustrates script 1.
Flow chart of Script 1 |
Lab 3: Applications in GIS
This week's lab covers watershed analysis for Kauai, Hawaii. The first step of the analysis was to use the Fill tool to correct errors in the DEM. This removes sinks so the modeled flow of the watershed is more accurate.
The next step was to use the Flow Direction tool, which determines the direction water would flow out of each cell. The tool assigns one of eight possible directions by comparing each cell to its neighbors, with flow going from a cell to its lowest neighbor.
The third step uses the Flow Accumulation tool. Using the flow direction raster, this tool counts, for each cell, the number of upstream cells that flow into it. I then applied a threshold of 200 cells, which produced an output containing only streams with 200 or more contributing cells.
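These first steps could also be scripted with the Spatial Analyst functions in arcpy. The sketch below uses placeholder file names and assumes the 200-cell threshold described above.

```python
# A sketch of the first hydrology steps; "kauai_dem.tif" and the output
# names are placeholders, not the lab's actual file names.
import arcpy
from arcpy.sa import Fill, FlowDirection, FlowAccumulation, Con

arcpy.CheckOutExtension("Spatial")
arcpy.env.overwriteOutput = True

filled = Fill("kauai_dem.tif")              # remove sinks from the DEM
flow_dir = FlowDirection(filled)            # one of 8 directions per cell
flow_dir.save("flow_dir.tif")
flow_acc = FlowAccumulation(flow_dir)       # upstream cell count per cell
flow_acc.save("flow_acc.tif")

# Keep only cells with 200 or more upstream cells as streams
streams = Con(flow_acc >= 200, 1)
streams.save("streams_200.tif")
```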
The next step was to use the Stream to Feature tool, which converts the streams to vector features while maintaining their flow direction. The Stream Link tool allowed me to clearly identify individual stream segments, and the Stream Order tool, which also uses the flow direction raster, assigned the hierarchy and scale of the streams.
Once the previous steps were complete, I used the Watershed tool; its output shows where the streams drain. I then added a pour point to the map at a location where a stream drains to the ocean, and the Watershed tool delineated the area that drains to that pour point.
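Continuing the sketch above, the remaining steps might look like the following; the pour point shapefile and snap distance are placeholders.

```python
# Continuation of the earlier sketch, reusing its saved placeholder rasters;
# "pour_point.shp" and the snap distance of 100 are assumptions.
import arcpy
from arcpy.sa import StreamLink, StreamOrder, StreamToFeature, SnapPourPoint, Watershed

arcpy.CheckOutExtension("Spatial")
arcpy.env.overwriteOutput = True

links = StreamLink("streams_200.tif", "flow_dir.tif")    # unique ID per segment
links.save("stream_links.tif")
order = StreamOrder("streams_200.tif", "flow_dir.tif")   # Strahler order by default
order.save("stream_order.tif")

# Convert the stream raster to polylines, preserving flow direction
StreamToFeature("streams_200.tif", "flow_dir.tif", "streams.shp")

# Snap the digitized pour point to the highest accumulation cell nearby,
# then delineate the watershed that drains to it
snapped = SnapPourPoint("pour_point.shp", "flow_acc.tif", 100)
ws = Watershed("flow_dir.tif", snapped)
ws.save("watershed.tif")
```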
The results of this analysis compare the modeled streams to the streams of the National Hydrography Dataset (NHD). I also compared the modeled watershed to the NHD watershed.
Watershed Analysis |
Wednesday, June 1, 2016
Lab 2: Applications in GIS
Lab 2 covered least cost path and corridor analysis. To perform the least cost path analysis, I reclassified land cover, elevation, and the Euclidean distance around the roads. Once the layers were reclassified, I used the Cost Distance tool, which calculates the accumulated least cost of travel from each of the national parks. To find the least cost path itself, I would need the Cost Path tool.
However, to determine the corridor between the national parks, I did not need the Cost Path tool. Once the Cost Distance outputs were created for both parks, I used the Corridor tool, which takes the two cost distance rasters as input. After adding the corridor layer, I adjusted the symbology and chose a threshold so the corridor would not be too wide. I used three color levels from least to most suitable, with least suitable the lightest and most suitable the darkest.
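As a sketch, the cost distance and corridor steps in arcpy could look like the following. The park and cost surface names are placeholders, and the weighted cost surface is assumed to already combine the reclassified layers.

```python
# A rough sketch of the corridor step; file names are hypothetical and the
# cost surface is assumed to be the combined reclassified layers.
import arcpy
from arcpy.sa import CostDistance, Corridor

arcpy.CheckOutExtension("Spatial")
arcpy.env.overwriteOutput = True

# Accumulated least cost of travel from each park across the cost surface
cost_a = CostDistance("park_a.shp", "cost_surface.tif")
cost_b = CostDistance("park_b.shp", "cost_surface.tif")

# The corridor raster sums the two cost distance surfaces; low values mark
# the most suitable movement corridor between the parks
corridor = Corridor(cost_a, cost_b)
corridor.save("park_corridor.tif")
```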
Most Suitable Corridor |
Programming: Module 3
The Module 3 lab involved using modules, conditions, and loops. To get the results in the screenshot below, I had to import the math module. Part of the script creates a list of players and the dice game results. Next, I created a while loop that randomly selects 20 integers ranging from 0 to 10, and I printed the full list of integers. If the number 8 appeared in the list, it had to be removed. To do this, I counted how many times 8 appeared; that count controlled a while loop inside a conditional statement that removed each 8 until none were left. The final results were then printed.
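A minimal sketch of the random-number portion is below; it assumes the integers come from Python's random module and follows the count-then-remove logic described above.

```python
# A sketch of the random-number step: build a list of 20 integers from 0-10,
# then remove every 8, using the count of 8s to control the while loop.
import random

numbers = []
while len(numbers) < 20:
    numbers.append(random.randint(0, 10))   # 0 through 10, inclusive
print(numbers)

count_eight = numbers.count(8)
if count_eight > 0:
    print(str(count_eight) + " eights will be removed")
    while count_eight > 0:
        numbers.remove(8)
        count_eight = numbers.count(8)      # recount to decide whether to keep looping
else:
    print("No eights in the list")
print(numbers)
```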
I felt this lab was rather difficult; it was a challenge to figure out the correct syntax and the order of each operation. Working through the lab, I quickly learned how important indentation is in a script. Below is a flow chart of the script.
Script Results |
Flow Chart of Step 4 |