Sunday, October 30, 2016

Special Topics: Lab 10

This week's lab was an introduction to statistics involving correlations and bivariate regression.  Part of the lab consisted of estimating missing values in 20 years of rainfall data for a rain station.  To fill in the missing data, I used the regression tool from the Data Analysis ToolPak.  Once I had the regression summary, I found the slope and intercept values and used them in the formula y = m*x + b, where x was the data I had for rain station B.

Using y = m*x + b assumes that when x is 0, y equals the intercept.  The slope tells us how much y changes for each one-unit change in x, while the intercept is the value of y when x is 0.  It assumes that if I have x, I can figure out y.  The regression analysis uses only the years where both station A and station B have data, and it finds the relationship between the two stations.
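As a rough illustration (not from the lab data), the sketch below fits the same kind of line with numpy.polyfit, which returns the slope and intercept just like the ToolPak regression summary, and then predicts a missing station A value from a hypothetical station B total.

```python
import numpy as np

# Hypothetical yearly rainfall totals for years where BOTH stations have data
station_a = np.array([42.1, 38.5, 51.0, 47.3, 44.8])   # known values, station A (y)
station_b = np.array([40.2, 36.9, 49.5, 45.1, 43.0])   # known values, station B (x)

# Fit y = m*x + b with station B as x and station A as y
m, b = np.polyfit(station_b, station_a, 1)

# Estimate a missing station A value for a year where only station B recorded data
b_only_year = 41.7                        # hypothetical station B total for that year
estimated_a = m * b_only_year + b
print(f"slope={m:.3f}, intercept={b:.3f}, estimated A={estimated_a:.1f}")
```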

Sunday, October 23, 2016

Special Topics: Lab 9

This week's lab covered the vertical accuracy of a DEM.  I overlaid field data points on the LIDAR layer and used the Extract Values tool.  Once the values were extracted, I had to convert them from feet to meters.  The next step was to calculate the difference between the LIDAR values and the field points.  Once the difference was calculated, I squared it and summed the squares.  The next step was to find the average and then take the square root to find the Root Mean Square Error (RMSE).

Once the RMSE was calculated, I multiplied it by 1.96 to determine the accuracy at the 95th percentile, and by 1.69 to determine the accuracy at the 68th percentile.  The lower the number, the more accurate the DEM.  To determine whether there was a bias, I had to find the Mean Error, which is the average of the signed differences.  Below are the results.  They show which land cover was the most accurate and that the urban area had the most bias.

Accuracy Results
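As a minimal sketch of the same steps (hypothetical elevations, already converted to meters), the block below computes the signed differences, the RMSE, the 95% accuracy value using the 1.96 multiplier, and the mean error used to check for bias.

```python
import numpy as np

# Hypothetical elevations in meters: LIDAR DEM values vs. surveyed field points
dem_z   = np.array([12.31, 15.02, 9.87, 20.44, 17.65])
field_z = np.array([12.25, 15.10, 9.80, 20.60, 17.55])

diff = dem_z - field_z                  # signed differences
rmse = np.sqrt(np.mean(diff ** 2))      # root mean square error
acc_95 = rmse * 1.96                    # accuracy at the 95% confidence level
mean_error = np.mean(diff)              # a non-zero mean error suggests a systematic bias

print(f"RMSE={rmse:.3f} m, 95% accuracy={acc_95:.3f} m, mean error={mean_error:.3f} m")
```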

Sunday, October 16, 2016

Special Topics: Lab 8

This week's lab covered interpolation.  The example used was water quality in Tampa Bay.  Four different techniques were used to display the Biochemical Oxygen Demand (BOD).  One technique used non-spatial analysis.  The next technique was Thiessen interpolation, which used the Create Thiessen Polygons tool.  The input features were the BOD points, and the output fields option was set to all fields.  A mask needed to be applied to show only the Thiessen polygons for Tampa Bay.  Each Thiessen polygon takes the value of the single sample point it contains, so that one point determines the value for the whole polygon.
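Since Thiessen interpolation is really just nearest-point assignment, a small sketch (with made-up BOD points, using scipy rather than the ArcGIS tool) shows the idea: every query location gets the value of its closest sample point.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical BOD sample points (x, y) and their measured values
pts = np.array([[2.0, 3.0], [7.5, 1.0], [5.0, 8.0]])
bod = np.array([1.4, 2.1, 0.9])

tree = cKDTree(pts)

# Any query location takes the value of its nearest sample point,
# which is exactly what a Thiessen (Voronoi) polygon encodes.
query = np.array([[6.0, 2.0], [3.0, 7.0]])
_, nearest = tree.query(query)
print(bod[nearest])     # BOD value assigned to each query location
```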

Technique three used Inverse Distance Weighting (IDW) interpolation.  The IDW tool was used to perform this, and I had to adjust the search radius and power.  The last technique was spline.  The Spline tool was used for both the regularized and tension options.  Points located very close to each other produced artificially high concentration values, so those points needed to be removed or averaged.
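As an illustration of what the IDW tool is doing (again with hypothetical points, not the lab data), the sketch below weights each sample by the inverse of its distance raised to the power parameter, so a higher power makes nearby points dominate more strongly.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighting: weights fall off with distance**power."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0):                  # query coincides with a sample point
        return values[d == 0][0]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

# Hypothetical BOD samples and a single query location
pts = np.array([[2.0, 3.0], [7.5, 1.0], [5.0, 8.0]])
bod = np.array([1.4, 2.1, 0.9])
print(idw(pts, bod, np.array([5.0, 4.0]), power=2.0))
```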

The results of the analysis were fairly similar for each technique except the regularized spline.  Below is an example of the tension spline.
Tension Spline

Sunday, October 9, 2016

Special Topics: Lab 7

The lab for this week compared TIN models with DEM models.  The TIN model shows the terrain and lake with fairly rigid lines. The DEM model shows the changes in elevation with far smoother contour lines.  The TIN didn't really show the terrain well, so I had to adjust the TIN by using the Edit TIN tool.

It is interesting that both DEMs and TINs can be created from elevation points.  DEM models are represented by rasters, while TINs are represented by vectors.  Since DEMs use pixels, the changes in elevation appear smooth.  Creating a DEM with slope, aspect, and elevation was also different from using triangulated elevation points.
Modified TIN
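Because a DEM stores one elevation value per cell, surface derivatives like slope come directly from neighboring pixel values.  The sketch below (a hypothetical 3x3 grid, a simplification of what the Slope tool computes) uses finite differences to turn the elevation raster into a slope raster.

```python
import numpy as np

# Hypothetical DEM: elevation in meters on a regular grid with 10 m cells
dem = np.array([[100.0, 101.0, 103.0],
                [ 99.0, 100.5, 102.5],
                [ 98.5, 100.0, 101.5]])
cell_size = 10.0

# Rate of elevation change per cell in the row (y) and column (x) directions
dz_dy, dz_dx = np.gradient(dem, cell_size)

# Slope in degrees for each pixel, as a raster-based DEM derivative would store it
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
print(slope_deg.round(2))
```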

Sunday, October 2, 2016

Special Topics: Lab 6

This week's lab covered location-allocation.  To perform this analysis, I added the distribution centers as the facilities and the customers as the demand points.  I configured the analysis settings, using straight lines for the output geometry, no impedance cutoff, and all of the facilities.  After solving, not all of the customers appeared to be assigned to the closest center, so I reassigned the market areas by performing a spatial join between the market areas and the customers.  I performed table joins to figure out how many customers go to which facility and used the Summary Statistics tool to count the customers in each market area.
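The spatial join plus Summary Statistics step boils down to counting how many customer records fall inside each facility's market area.  A minimal pandas sketch of that counting step is below; the table and the CustomerID/FacilityID column names are hypothetical stand-ins for the joined attribute table.

```python
import pandas as pd

# Hypothetical result of the spatial join: one row per customer,
# tagged with the facility whose market area contains it
joined = pd.DataFrame({
    "CustomerID": [101, 102, 103, 104, 105],
    "FacilityID": [1, 1, 2, 3, 2],
})

# Equivalent of the Summary Statistics count: customers per facility
counts = joined.groupby("FacilityID")["CustomerID"].count().rename("CustomerCount")
print(counts)
```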

Next, I created a new feature class by joining the unassigned market areas and the new table.  The market areas are now reassigned.

The weakness is that the allocation did not always assign customers to the closest distribution center.  The strength is the ability to analyze all of the settings and inputs very quickly to produce market areas.
Location Allocation