Monday, May 8, 2017

Spectral Signature Analysis and Resource Monitoring

Goals and Background

           The main goal of this lab is to gain experience in the measurement and interpretation of the spectral reflectance of a number of earth surfaces. From there, basic monitoring of earth resources using different bands will be completed. Erdas Imagine will be used to collect, graph and analyze the spectral signatures of the earth surfaces. Additionally, the health of vegetation and soil will be explored using a simple band ratio technique.

Methods

         A personal folder was created prior to the start of lab 8 in order to ensure that the data was saved in the correct spot throughout the lab. Part 1 involved spectral signature analysis. A Landsat ETM+ image from the year 2000 covering Eau Claire and the surrounding region was used to analyze the spectral signatures of certain earth surface and near-surface features. The spectral reflectance of 12 natural and man-made surfaces was examined:

1. Standing Water
2. Moving Water
3. Deciduous forest
4. Evergreen Forest
5. Riparian Vegetation
6. Crops
7. Dry Soil
8. Moist Soil
9. Rock
10. Asphalt Highway
11. Airport Runway
12. Concrete Surface

          Erdas was opened and the image of Eau Claire and the surrounding areas was loaded. Under the Home tab, the drawing tool was selected, and the polygon tool inside it was chosen. A polygon was drawn on Lake Wissota to get the spectral reflectance of standing water. Next, clicking Raster, then Supervised, then Signature Editor opens the signature editor. The Class 1 label was switched to standing water, and the rest were changed to their correct labels as they were completed. Clicking the Display Mean Plot Window button at the top shows what the spectral plot looks like for the area inside the polygon drawn on Lake Wissota. The same process used to collect the spectral signature for surface 1 was used for surfaces 2 through 12. All of the signatures were displayed in one signature mean plot window to get a good idea of how the features vary; the Scale Chart to Fit Current Signatures button can be used to fit all the different signatures in one frame. The chart background color was changed to white to increase the visibility of the graph, and from there the values were recorded and assessed.
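What the Signature Editor's mean plot reports can be sketched in a few lines: the per-band mean of the pixels inside the drawn polygon. The image cube and mask below are hypothetical stand-ins for the Landsat ETM+ subset and the Lake Wissota polygon, not the lab's actual data.

```python
import numpy as np

# Hypothetical 6-band image cube (bands, rows, cols) and a boolean polygon
# mask -- stand-ins for the Landsat ETM+ image and the AOI polygon drawn
# on Lake Wissota in Erdas.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(6, 100, 100)).astype(float)
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True  # pixels inside the drawn polygon

# The signature "mean plot" is just the per-band mean of the masked pixels.
signature = image[:, mask].mean(axis=1)
print(signature.shape)  # one mean value per band
```

Plotting `signature` against band number gives the same kind of curve the signature mean plot window displays.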

        Part 2 involves resource monitoring, done by performing a band ratio to view the health of vegetation and soils. The band ratio used is the normalized difference vegetation index (NDVI), and the equation is shown below.

NDVI = (NIR − Red) / (NIR + Red)
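The NDVI computation can be sketched directly from the equation. The reflectance values below are hypothetical; for Landsat 7 ETM+, Red is band 3 and NIR is band 4.

```python
import numpy as np

# Minimal NDVI sketch with hypothetical reflectance values.
red = np.array([0.10, 0.08, 0.30], dtype=float)
nir = np.array([0.50, 0.45, 0.32], dtype=float)

# Guard against division by zero where NIR + Red == 0.
ndvi = np.where((nir + red) == 0, 0.0, (nir - red) / (nir + red))
print(ndvi)  # healthy vegetation tends toward +1, water toward negative values
```

NDVI is always bounded between −1 and +1, which is what makes it a convenient normalized index.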

       With a fresh Erdas session open, a viewer was added and the second image provided in the lab was loaded. From there, clicking Raster, then Unsupervised, then NDVI opens the indices interface. A folder named NDVI was added to the lab 8 folder in the personal drive and the output was saved there. Be sure the sensor reads Landsat 7 Multispectral and that the function is NDVI. After completion the image was viewed and the vegetation was documented. The map generated from this tool is displayed below as figure 5.

       Section 2 of Part 2 performs the same steps as section 1, though the goal now is to monitor the spatial distribution of iron content in soils within Eau Claire and Chippewa counties. The equation used is displayed below.

Ferrous Mineral = MIR / NIR
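The ferrous mineral ratio is an even simpler band ratio. The values below are hypothetical; for Landsat 7 ETM+, MIR corresponds to band 5 (1.55–1.75 µm) and NIR to band 4 (0.77–0.90 µm).

```python
import numpy as np

# Ferrous mineral ratio sketch: MIR / NIR, with hypothetical reflectance values.
mir = np.array([0.30, 0.22, 0.15], dtype=float)
nir = np.array([0.25, 0.40, 0.30], dtype=float)

ratio = np.where(nir == 0, 0.0, mir / nir)
print(ratio)  # higher values suggest more iron-bearing soil
```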

       Be sure to change the selected function to Ferrous Minerals this time so the tool outputs the target variable. A map of the ferrous minerals is displayed below as figure 4. The band wavelength ranges are listed below for further discussion in the results section.

Band 1 (Blue): 0.45–0.52 µm
Band 2 (Green): 0.52–0.60 µm
Band 3 (Red): 0.63–0.69 µm
Band 4 (NIR): 0.77–0.90 µm
Band 5 (Short-Wave Infrared): 1.55–1.75 µm
Band 6 (Thermal Infrared): 10.40–12.50 µm


Results

        Figure 1 below shows the first step of the lab: a polygon drawn on standing water. The table on the left shows how the values were displayed, and the signature mean plot for the standing water is displayed on the right.


Figure 1

            Figure 2 below shows the signature editor in use in Erdas. It displays the values for the red, green and blue bands.




Figure 2

         Figure 3 below displays the signature mean plot of each of the surface and near-surface features in the lab. This is an effective display because it shows all the values next to each other, which makes it easy to note the differences between features.


Figure 3

         Figure 4 below displays the distribution of ferrous minerals in Eau Claire and Chippewa counties. The higher mineral concentrations are clearly centered to the west. This makes sense because there is less tree cover there, and the soil has been more eroded and has had time to form minerals.




Figure 4
         Figure 5 is a map that displays the areas of heavy vegetation in Eau Claire and Chippewa counties. The entire eastern side is covered in thick vegetation, mainly in the northeast corner, whereas in a line to the southwest of Lake Wissota there is a good amount of land without vegetation.



Figure 5



Sources

Satellite image is from Earth Resources Observation and Science Center, United States Geological Survey.

Monday, May 1, 2017

Photogrammetry

Goal and Background

         The purpose of this lab was to gain an understanding of photogrammetric tasks that can be performed on aerial photographs and satellite images, along with the skills to obtain and interpret spectral reflectance and to understand why certain earth features appear the way they do in an aerial photograph. Additionally, the mathematics behind photographic scales are explored, including area and perimeter measurement. Finally, the complex process of performing orthorectification on satellite images was completed, which is a very applicable skill to have, as it ensures the images are geospatially accurate.

Methods

        Before beginning lab 7, a specific lab 7 folder was created inside the Q drive to ensure that all data was saved in the same spot and so the professor could view all the work done throughout. The first step of the lab was to determine the distance between two points, which would be used to calculate the scale of the image. The distance is 2.7 inches on the monitor and 8,824.47 feet in real life. Converting to inches, 8,824.47 feet × 12 = 105,893.64 inches, and 105,893.64 / 2.7 ≈ 39,220, so the scale is approximately 1:40,000.
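The scale arithmetic can be sketched in a couple of lines: the scale denominator is the ground distance divided by the photo distance, in the same units. The numbers are the ones measured in the lab.

```python
# Photo scale from a measured distance: scale denominator =
# ground distance / photo distance, in the same units.
photo_in = 2.7                 # distance measured on the monitor, inches
ground_ft = 8824.47            # corresponding ground distance, feet

ground_in = ground_ft * 12     # convert feet to inches
denominator = ground_in / photo_in
print(round(denominator))      # ~39220, i.e. a scale of roughly 1:40,000
```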

       Next, the altitude at which an image was taken was given and the goal was to determine the scale. The elevation of Eau Claire county was also given to use in the calculations, which are shown below.

            S = f / (H − h)
f = 152 mm
H = 20,000 feet
h = 796 feet

S = (152 mm)/(20,000 ft − 796 ft)
S = (5.98 in)/(240,000 in − 9,552 in)
S = (1 inch)/(38,536.45 in)
Scale is approximately 1:39,000
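The same calculation can be done with everything converted to inches up front (using the exact 25.4 mm/inch conversion rather than the rounded 5.98 in focal length, which shifts the denominator slightly):

```python
# Scale from camera focal length and flying height: S = f / (H - h),
# with everything converted to inches first (values from the lab).
f_in = 152 / 25.4              # 152 mm focal length -> inches (~5.98)
H_in = 20_000 * 12             # flying altitude above datum, inches
h_in = 796 * 12                # terrain elevation, inches

denominator = (H_in - h_in) / f_in
print(round(denominator))      # ~38509, i.e. roughly 1:39,000
```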

        The next step of the lab was to digitize an area around a pond to find the size of it. The measure perimeters and areas tool was used to draw the polygon and the area was then calculated. The pond was 93.67 acres. 

       The relief displacement of a smokestack in an aerial image was determined in the next part of the lab. The displacement between the principal point and the top of the stack is 0.354 inches. The tower should be moved 0.354 inches toward the principal point to account for this discrepancy.
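The standard relief displacement formula is d = r·h / H, where r is the radial distance from the principal point to the top of the feature on the photo, h is the feature height and H is the flying height above the local datum. The values below are hypothetical, chosen only to illustrate how a displacement like the 0.354 in above falls out of the formula; they are not the lab's actual measurements.

```python
# Relief displacement sketch: d = r * h / H. The numbers are
# hypothetical, chosen to illustrate a 0.354 in displacement.
r_in = 10.5      # radial distance on the photo, inches (hypothetical)
h_ft = 1_600     # smokestack height, feet (hypothetical)
H_ft = 47_460    # flying height above datum, feet (hypothetical)

d = r_in * h_ft / H_ft
print(round(d, 3))  # displacement in inches, measured toward the principal point
```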

       The next part of the lab sought to generate a three-dimensional image using an elevation model. The goal was to evaluate relief displacement and how it affects aerial images. The anaglyph tool was used in Erdas to make an image that could be viewed with anaglyph glasses. The vertical exaggeration was set to 1 and the rest of the defaults were accepted. The model was run, the output was saved in the lab 7 folder and the results were viewed in Erdas. The elevation features of Eau Claire are very prominent after running this tool; for example, the hill on the UW campus and the one near Mt. Simon show up very clearly.

        The third and final part of the lab involved the orthorectification of satellite images. This process is very lengthy: it made up over three quarters of the lab and took many hours to complete. The first step was to create a new project. An image of Palm Springs, California was opened in Erdas and the Photogrammetry Project Manager was used to create a new block. The polynomial-based pushbroom model was used and the SPOT pushbroom sensor was selected.

        From there the horizontal reference sources were set. The projection chooser was opened and the projection type was changed to UTM; the spheroid name was set to Clarke 1866, the datum name was changed to NAD27 (CONUS) and the UTM zone was changed to 11. No changes to the vertical section were needed.

        Next, the GCPs were placed to make sure the images were spatially accurate. The Start Measurement tool was clicked to activate it, the classic point measurement tool was used and the second image was added. Screen captures of the desired x and y coordinates were provided to ensure data quality. The points were added by clicking on the ortho image in the left view; the Create Point icon was clicked and the values were very close to the x and y provided. Next the corresponding point was found on the spot_pan image in the right viewer, Create Point was used again and the coordinates were checked. The next nine GCPs were collected the same way and then saved.

        Next, the second image was added to the block file and the GCPs were collected for that image. The type and usage were set for each of the control points and the tie points were collected. The classic point measurement tool was again used, following the same process described above to collect the GCPs.

         Then, the automatic tie point collection was completed. This is a necessary step before orthorectifying the two images in the block: the tie point collection process measures the image coordinate positions of ground points appearing in the overlapping areas of the two images. After that, the images were triangulated and then orthorectified by clicking the Start Ortho Resampling Process icon. The DEM for Palm Springs was used as the DTM source and the output cell size for x and y was changed to 10. The file was saved in the lab 7 folder created at the beginning of the lab and the resampling technique was bilinear interpolation; be sure to use the current cell sizes. Once the images are done being orthorectified they can be viewed to take note of the discrepancies between them. The images can be viewed by clicking on the plus sign next to the ortho folder, repeating the process in a second viewer. Sync the views, zoom in and use the swipe tool to see the quality of the output.


Results

        A stereoscopic image is effective in showing things like the elevation of the terrain as well as man-made features that are higher than the ground around them. When viewed with the 3D glasses the buildings show up very clearly, almost as they would appear in reality. In comparison, standard 2D images do not show the elevations of the features.

Figure 1
      
         The resolution is definitely better in the right image of figure 1 compared to the left image. The left image shows the elevations much better; it is harder to notice elevation changes of the landscape in the right image. However, the right image depicts the heights of the buildings and other structures much better. This could be because the right image appears to have been taken close to nadir, meaning the camera was nearly perpendicular to the area of interest. This can be noted when looking at the smokestack to the west of Towers Hall.



Figure 2

                The distance between A and B is 2.7 inches on the monitor and 8,824.47 feet in real life. Converting, 8,824.47 feet × 12 = 105,893.64 inches, and 105,893.64 / 2.7 ≈ 39,220, so the scale is approximately 1:40,000. It is important to view the image at the size of the screen; if the image is resized, a different scale would be interpreted.



Figure 3
       Figure 3 shows the step when the image is ready for triangulation.



Figure 4
      Figure 4 above shows the process that was used to put in the GCPs; the example shown is from when the second image was added. The highlighted row shows how the GCP was added to the second image.


Figure 5
    
           Figure 5 above shows that the images are not matched up as well as one would hope. The changes in color along the white ridge are apparent and easily noticeable. The swipe tool was used in Erdas after linking the views in order to get a good idea of the quality of this output.




Figure 6

       Figure 6 shows another issue between the images: it is clear that they are not perfectly aligned. However, if the images are viewed at full extent there is no noticeable difference between them.


Figure 7

          Figure 7 above shows what the two images looked like after the orthorectification. Upon zooming in close on where the pictures overlap, they are not that spatially accurate; this is discussed in more detail above and displayed in figures 5 and 6. The degree of accuracy between the two orthorectified images is a little disappointing. There is a sort of stair-step effect, and the images do not overlap perfectly. The ridge that appears as a white line in the middle of the image shows that they are not spatially accurate.


       After completing this lab, there are a number of photogrammetric tasks that could be duplicated. This lab was very strenuous and long, and a few errors occurred throughout; aside from the data corruption issues, this was a very beneficial lab with many real-world applications.


Sources

National Agriculture Imagery Program (NAIP) images are from United States Department of Agriculture, 2005. 

Digital Elevation Model (DEM) for Eau Claire, WI is from United States Department of
Agriculture Natural Resources Conservation Service, 2010. 

Lidar-derived surface model (DSM) for sections of Eau Claire and Chippewa are from
Eau Claire County and Chippewa County governments respectively. 

Spot satellite images are from Erdas Imagine, 2009. 

Digital elevation model (DEM) for Palm Spring, CA is from Erdas Imagine, 2009.  


National Aerial Photography Program (NAPP) 2 meter images are from Erdas Imagine, 2009.  

Thursday, April 13, 2017

Geometric Correction

Goals and Background

         The purpose of the geometric corrections lab is to develop skills using the two major types of geometric correction. These corrections are often performed on satellite images before processing in order to improve data quality and integrity. This process helps to align aerial images, which are rarely if ever perfectly aligned due to factors such as differences in altitude or the angles at which the images were taken. Rectification of an image is the process of converting a data file's coordinates to a different coordinate system, known as a reference system. The two forms of geometric correction are listed below.

1. Image-to-Map Rectification: Map coordinates systems are used to rectify the image data to the correct pixel coordinates.

2. Image-to-Image Rectification: Previously corrected images of the same locations are used to rectify the image data pixel coordinates.

Methods

        The first method used was image-to-map rectification. Chicago_drg.img was brought into viewer one and fit to frame; this is a USGS 7.5-minute digital raster graphic (DRG) that covers part of the Chicago region and adjacent areas. A second viewer was then opened and Chicago_2000.img was opened there.
       Under the Multispectral tab in the top right, Control Points was selected. Under the Select Geometric Model tab the Polynomial box was checked. Selecting the geometric model opened two tools, the Multipoint Geometric Correction tool and the GCP Tool Reference Setup. All of the default settings were accepted in the new viewer. Chicago_drg.img was then brought in from the lab 6 folder that was previously copied into my own personal folder in the Q drive. In the reference map information dialog click OK; from there the polynomial model properties are displayed, and before the addition of GCPs it reads "model has no solution." Make sure to keep both images at full extent, as the software will crash repeatedly if that is not done. Before additional GCPs can be added, the previous ones are deleted by highlighting them, right-clicking and selecting Delete Selection.
        Next, three pairs of GCPs were added to the images by clicking on the Create GCP tool. Once three are added the model will read "model solution is current," and GCPs can then be added by clicking on only one of the images. Look at the root mean square (RMS) error to see how accurate the GCPs are; for this part of the lab the RMS error should be under 2. A GCP can be adjusted by zooming in and moving it until the RMS error in the table on the bottom gets under 2. This process was repeated for all the GCPs. A screen capture is provided below in figure 1 that also shows the table, with the RMS error under 2.
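The idea behind the polynomial model and its RMS error can be sketched as a least-squares fit: a first-order (affine) polynomial is fit from source to reference coordinates, and the RMS error summarizes the residuals at the GCPs. The coordinates below are hypothetical, and Erdas' actual implementation is more elaborate; this only illustrates the concept.

```python
import numpy as np

# First-order polynomial (affine) fit from source to reference coordinates,
# then the RMS error of the residuals. GCP coordinates are hypothetical.
src = np.array([[10.0, 12.0], [200.0, 15.0], [110.0, 180.0], [20.0, 190.0]])
ref = np.array([[5.0, 8.0], [195.0, 12.0], [104.0, 177.0], [14.0, 186.0]])

A = np.column_stack([src, np.ones(len(src))])     # [x, y, 1] design matrix
coeffs, *_ = np.linalg.lstsq(A, ref, rcond=None)  # one column each for x', y'

residuals = A @ coeffs - ref
rms = np.sqrt((residuals ** 2).sum(axis=1).mean())
print(rms)  # the lab's target was an RMS error under 2
```

Moving a GCP in the tool changes one row of `src`/`ref`, which refits the polynomial and updates the reported RMS.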
         From here, Display Resample Image was clicked; the output image was renamed Chicago_2000gcr.img and saved in the folder in the Q drive. All of the default parameters were accepted and the image was then brought into a viewer to view the improvements.


Figure 1
   


            In part 2 of this lab the majority of the steps from part 1 are repeated. The image sierra_leone_east1991.img was brought in and fit to frame, and the second image sl_reference_image.img was brought into the second viewer. The swipe function was activated to see the extreme distortion in the images. From there the steps described above were followed to the point of inserting GCPs, and all the same steps were followed for each GCP added. The Display Resample Image button was clicked again and the image was saved as sl_east_gcc.img in the lab 6 folder in the Q drive. The resample method was changed to bilinear interpolation and all other defaults were accepted. This processing takes some real time, so be prepared to wait. The corrected image was then brought into Erdas to take note of how much better the quality is.


Figure 2


Results

        This lab helped to build a good basic skill set in geometric correction. Making sure that images are correctly rectified is essential to producing high-quality, accurate images for analysis. It is remarkable how an image can appear to be correct in comparison to another one while the root mean square error shows something different. The ground control point locations in this lab were previously selected by Professor Wilson, though selecting good locations for GCPs is essential to the process.

Sources

Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey

Digital raster graphic (DRG) is from Illinois Geospatial Data Clearing House.


Wednesday, April 5, 2017

LIDAR Remote Sensing

Goals and Background

           The goal of this lab is to become familiar with the structure and processing of LIDAR data. To build that knowledge, lab 5 emphasized how to process and retrieve different surface and terrain models. Additionally, intensity images were processed and created, along with other outputs derived from a point cloud. In this lab the LIDAR data used was in the LAS file format. Working with LIDAR is an essential skill, as it is a rapidly growing field.

Methods

         For this lab, ArcMap and Erdas were both used. To begin the lab, open Erdas; in the viewer go to Open and select the files provided in the LAS folder in the lab 5 folder. Be sure to change the files of type to "LAS as Point Cloud (*.las)." After the file type is changed, all of the data can be brought into Erdas. Be sure to uncheck the Always Ask button and to click No. It takes a while for the point cloud to load. When working with an unprojected dataset such as this it is important to take a look at the tile index and the metadata. By opening ArcMap and bringing in QuarterSection_1.shp one can be sure that the point cloud was displayed in the correct area.

       Next, Erdas and ArcMap were closed and a blank page was opened in ArcMap. The goal of this objective is to create a LAS dataset, explore its properties and visualize it as a point cloud in both 2D and 3D. After connecting to the student folder using ArcCatalog, a new LAS dataset named Eau_Claire_City was created. The same files from Erdas were then added by clicking Add Files. After the data is in, click Calculate, under the Statistics tab, to compute the statistics for the dataset; these are used to ensure data quality and help make sure the LAS dataset is accurate.

        The next step is to add a coordinate system to the LAS Dataset. No coordinate system was specified, so the metadata was used to find the information regarding the coordinate system. After consulting the metadata it is possible to define the (XY) and (Z) coordinate systems. NAD 1983 HARN Wisconsin CRS Eau Claire (US Feet) is used for the (XY) coordinate system, while NAVD 1988 US feet is used for the Z coordinate system. To make sure that the dataset is in the correct spatial location a shapefile of Eau Claire county is brought into ArcMap. After a little examination it is clear the dataset is in the correct location.

        Next, be sure the LAS Dataset toolbar is active; this will be used to visualize the point cloud and later to help generate other products. Under the properties of the Eau_Claire_City LAS dataset created earlier, change the number of classes from 9 to 8. When zoomed out at full extent the points may not be visible; this is done to keep the software responsive, and upon zooming in the data will appear.

         Using the LAS Dataset toolbar, expand the surface menu; from there aspect, slope and contour were assessed one at a time. Next, the contours were used to help give a better idea of what would be generated with the DSM. The contour interval can be changed, and it is useful to see how this affects the display. Another way to change the contours is to go into the Layer Properties and click the Filter tab; in the bottom right of the tab are four predefined settings that use different methods of classification.

        In this next step, DSMs and DTMs were created using the same point cloud. The raster products were created at a 2-meter spatial resolution. Four products were created: a DSM, a DTM and a hillshade of each. The LAS Dataset to Raster tool, found under Conversion Tools, was used. Some defaults were accepted, though the sampling value field was changed to 6.56168 each time, which is approximately 2 meters. The Hillshade tool is found under 3D Analyst Tools and Raster Surface. The defaults there are all acceptable; just be sure the output is saved somewhere easily accessible.
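The 6.56168 sampling value comes from a unit conversion: the dataset's horizontal units are US feet, so a 2-meter cell size has to be expressed in feet.

```python
# The LAS Dataset to Raster sampling value is entered in the dataset's
# horizontal units (US feet here), so a 2 m cell size must be converted.
METERS_PER_FOOT = 0.3048

sampling_value_ft = 2 / METERS_PER_FOOT
print(round(sampling_value_ft, 5))  # 6.56168, the value used in the lab
```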

        The final step of this lab was to create a LIDAR intensity image from the point cloud. This was done very similarly to the step above: the LAS Dataset to Raster tool was used, with the value field changed to Intensity, the void fill changed to natural neighbor, and the same cell size used for the DSM and DTM. From there the image was brought into Erdas, where it was automatically enhanced; note that the file needs to be brought in as a TIFF.

Results

       

Figure 1 DSM

          Figure 1 above is a screenshot showing the grid used throughout the exercise, pictured by the red lines. The image is a bare-earth elevation model: no vegetation or buildings are included. In contrast, figure 2 below shows the terrain of Eau Claire including the trees and buildings. Each has its own uses: figure 1 shows the elevation changes much more clearly, while figure 2 gives a better representation of how populated the area is and what the density of vegetation is.




Figure 2 DTM

       Figure 3 below shows the intensity output that was created. In ArcMap it was very hard to see anything in the image; it is nearly impossible to differentiate features.

Figure 3 shows the intensity output on ArcMap
        Figure 4 below shows the same intensity image, displayed in Erdas. When the image is brought into Erdas as a TIFF, Erdas automatically enhances it. Looking at the image below, it is clear that it has very high spatial resolution; this is an image that would be very well suited for aerial image interpretation. It is a very clear image and it is easy to tell what is water, vegetation or man-made buildings.



Figure 4 shows the intensity output in Erdas



Sources

Lidar point cloud and Tile Index are from Eau Claire County, 2013.
Eau Claire County shapefile is from Mastering ArcGIS, 6th Edition, by Margaret Price, 2014.

Tuesday, March 28, 2017

Miscellaneous image functions

Goals and Background

           Before completing lab 4, a general understanding of Erdas Imagine and its functions is necessary. Through the first three exercises this semester, Professor Cyril Wilson led classroom discussions and hands-on tutorials on how to complete various tasks. This lab is designed to further skills in a number of areas: delineating a study area from a larger satellite image, showing how changing the spatial resolution can improve an image for visual interpretation, and becoming familiar with radiometric enhancement techniques. It is also important to be able to link Google Earth with Erdas to utilize the benefits of both. Additionally, resampling, image mosaicking and binary change detection methods were learned throughout this lab. The purpose of this lab is to gain skills in all of the areas discussed above.

Methods

Part 1

         The first step of this lab is to open Erdas Imagine and set up a designated area to save the project, with a specific title including the user's last name so it will not be mixed up. From there the image eau_claire_2011.img was brought into Erdas; all the images used throughout the lab were provided by Professor Wilson. Next, by clicking on the Raster tab, right-clicking on the image and selecting the Inquire Box, a white square box is displayed. The box was placed in the Eau Claire/Chippewa area by dragging with the left mouse button. Next, the Subset and Chip and Create Subset Image tools were used to create a subset of the image, which was then saved under a specific name. The area that was captured is depicted as figure 1 below.

Figure 1


          In section two of the lab the same image from the first step was used again. The subset image created before was brought into an additional viewer and the file type was changed from .img to .shp. After that the shapefile was overlaid on top of eau_claire_2011.img. Next, the area of interest was selected by clicking on the two counties, Chippewa and Eau Claire, one after the other; the areas should turn from blue to yellow. From here the area of interest was saved as an .aoi file in the specific folder created at the beginning of the lab. By clicking on Raster, then Subset and Chip as in step one, the section is brought into the subset window. Figure 2 illustrates a screen capture of the finished product.

Figure 2


Part 2

         The goal of part 2 is to create a higher spatial resolution image from one that is coarser in order to improve the viewing experience. The image ec_cpw_2000.img was brought into the viewer and ec_cpw_2000pan.img was brought into a second viewer. From here the pan sharpen tool was used to sharpen the image from 30 to 15 meters. This control is found under the Raster tools: Pan Sharpen, then Resolution Merge on the pull-down menu. Under the resampling techniques, nearest neighbor was selected. An image fusion folder was created for the output images, which layer the multispectral and panchromatic bands together.
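The idea behind a resolution merge can be sketched with one common scheme, the Brovey transform: upsample the multispectral bands to the panchromatic grid, then rescale each band by pan / mean(bands). The arrays are tiny hypothetical grids, and Erdas' Resolution Merge offers several methods; this is only one of them, shown for illustration.

```python
import numpy as np

# Brovey-style pan-sharpen sketch on hypothetical reflectance grids.
rng = np.random.default_rng(1)
ms = rng.uniform(0.1, 0.9, size=(3, 4, 4))   # 3 multispectral bands at 30 m
pan = rng.uniform(0.1, 0.9, size=(8, 8))     # panchromatic band at 15 m

# Nearest-neighbor upsample of each band by a factor of 2 (as in the lab).
ms_up = ms.repeat(2, axis=1).repeat(2, axis=2)

# Rescale each upsampled band so the per-pixel intensity follows the pan band.
sharpened = ms_up * pan / ms_up.mean(axis=0)
print(sharpened.shape)  # (3, 8, 8): multispectral detail on the 15 m grid
```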

Part 3

         Part 3 has the goal of using radiometric enhancement techniques to remove haze from an image. The haze reduction tool was used and a specific folder was created for the output image. For this exercise all the default values were used, and a second viewer was opened in order to see the differences between the images.

Part 4

        The goal of part 4 was to use a recent development in Erdas that allows the user to synchronize Google Earth imagery from GeoEye (a high-resolution satellite) with the Erdas platform. Syncing the views makes the potential uses clear, as the Google Earth imagery has very high resolution.

Part 5

         Part 5 involved resampling images, which is the process of changing the size of the pixels. An image can be resampled up or down, though there was no use for resampling down here. The image was brought in and fit to frame; then, under the Spatial tab in the top right, Resample Pixel Size was selected on the pull-down menu. Again, a dedicated folder was created for the output images, the output cell size was changed from 30x30 to 15x15 meters and the default nearest neighbor method was accepted. Next, the same process was run again with bilinear interpolation selected rather than nearest neighbor, and the bilinear result came out much better.
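The difference between the two resampling methods can be sketched on a toy grid: nearest neighbor duplicates pixels, while bilinear interpolation blends neighboring values, which is why it looks smoother. The 4x4 array is a hypothetical brightness-value grid.

```python
import numpy as np

# Resampling a coarse grid to half the pixel size (30 m -> 15 m).
coarse = np.arange(16, dtype=float).reshape(4, 4)

# Nearest neighbor: each pixel is simply duplicated.
nearest = coarse.repeat(2, axis=0).repeat(2, axis=1)

# Bilinear: interpolate along one axis, then the other, at the new samples.
rows = np.linspace(0, 3, 8)
cols = np.linspace(0, 3, 8)
tmp = np.array([np.interp(rows, np.arange(4), coarse[:, j]) for j in range(4)]).T
bilinear = np.array([np.interp(cols, np.arange(4), tmp[i, :]) for i in range(8)])

print(nearest.shape, bilinear.shape)  # both (8, 8); bilinear varies smoothly
```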

Part 6

         The goal of part 6 is to use image mosaicking to look at multiple satellite images as one. Both images were captured in May 1995, so they were taken at roughly the same time. When bringing in the images, be sure to click on Multiple Images in Virtual Mosaic and make sure Background Transparency is also checked; the images are brought in one at a time with the same steps. The next task was to use Mosaic Express to create one seamless tile. The Mosaic Express button was selected, both images were added and the output was saved into another new folder in the lab 4 folder. The default parameters were accepted and the model was run. The results are shown below.

Figure 3


         The next step of this part focuses on using a much more advanced mosaic routine, MosaicPro, in order to cut down on the differences where the images are stitched together. Once the images were brought in, the display order was manipulated to see which would be better. In the Color Corrections tab, histogram matching was set to Overlap Areas. Then, by clicking on the Set Overlap Function key, the default, Overlay, was selected. The mosaic was processed and the results were much better than with Mosaic Express.

Figure 4


Figure 5



Part 7

         Part 7 focuses on binary change detection and image differencing. The change in brightness values from 1991 to 2011 for Eau Claire county and four surrounding counties is what is being examined. The images were brought into Erdas in two separate viewers. Under the Raster tab, Two Image Functions was selected to get to Two Input Operators, the tool that performs the operations on the images. The image differencing output was saved in a new folder in the lab 4 folder. Be sure to change the operator from + to −. From there open the metadata and the histogram to see the results.

        Next, a map of the changes between the images was made using the spatial modeler. The equation below shows how the negative values were removed from the differenced image.

ΔBVijk = BVijk(1) − BVijk(2) + c

BVijk(1) = brightness values of the 2011 image
BVijk(2) = brightness values of the 1991 image
c = constant (127 in this case)
i = line number
j = column number
k = a single band of Landsat TM
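The differencing step can be sketched directly from this equation: subtract the 1991 brightness values from the 2011 values and add the constant 127 so the result stays non-negative; pixels far from 127 changed the most. The 2x2 arrays are hypothetical stand-ins for the Landsat bands, and the ±50 change threshold is illustrative, not the lab's actual cutoff.

```python
import numpy as np

# Image differencing with a constant offset, per the equation above.
bv_2011 = np.array([[120, 60], [200, 90]], dtype=np.int16)
bv_1991 = np.array([[118, 130], [90, 92]], dtype=np.int16)

diff = bv_2011 - bv_1991 + 127  # 127 = "no change"

# A simple binary change mask: flag pixels beyond +/- 50 of the
# no-change value (the threshold here is only illustrative).
changed = np.abs(diff - 127) > 50
print(diff)
print(changed)
```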



Figure 6

          Figure 6 above is a screenshot of the final image after running the image differencing. The red illustrates areas that changed from 1991 to 2011, and the grey areas had no change.


Results

         Completing this lab built on many of the skills that were introduced before it. It covered a number of tools and processes that will be useful in the long term. Using Subset and Chip, smaller images can be taken out of larger originals. The pan sharpen tool makes an image more pleasing to the viewer and better for image interpretation, with significantly less of a pixelated look. Additionally, haze reduction is a tool that should be used almost any time there is aerial imagery, because there are particulates and water vapor in the atmosphere that create the haze that is seen. After haze reduction the image appeared darker and more defined: the white was gone and the bodies of water appeared much darker and clearer. Using Google Earth along with Erdas is also a critical technique; with the viewers linked, if the Erdas image becomes too blurry the Google Earth imagery is still very clear, which is essential in image interpretation. When looking at resampling, nearest neighbor did not show a large difference, while bilinear interpolation made a significant one: the image was much sharper and clearer. The Mosaic Express tool was effective, though not ideal, because there was still a clear line between the pictures. In comparison, MosaicPro produced a smooth, unnoticeable transition between the images, and the colors were also much closer, as shown above in figure 4.
         The final task was the image differencing from 1991 to 2011. The majority of the changes occurred not near urban centers but in rural areas. This could be because of a change in land use from agriculture to residential or vice versa, along with countless other possibilities.




Sources

Doctor Cyril Wilson, University of Wisconsin Eau-Claire, Spring 2017, Geography 338