Tuesday, November 25, 2025

Module 5 - Unsupervised and Supervised Image Classification

 

Figure 1: Map of Current Land Use in Germantown, Maryland

Exercise 1:

This exercise focused on completing unsupervised classifications. Unlike supervised classification, where a training phase uses pixels from known classes to guide the process, unsupervised classification lets the software group pixels based on their spectral characteristics, and at the end of the process the user assigns a class to each group. The accuracy of the classification is influenced by the maximum iteration setting and the convergence threshold. The iterations allow repeated passes over the data, while the convergence threshold sets the percentage of pixels that must remain in the same cluster between iterations before the software considers the result stable and stops. Additionally, the skip factor governs how many pixels are sampled and therefore affects processing time. For example, a skip factor of 1 analyzes pixels one by one, whereas a skip factor of 2 means that only every other pixel is analyzed.
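Although the clustering itself was done through ERDAS Imagine's dialogs rather than code, a minimal Python sketch, using scikit-learn's KMeans as a stand-in for ISODATA and a made-up image array, shows how the three settings map onto familiar parameters:

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up stand-in for a (rows, cols, bands) multispectral image.
image = np.random.randint(0, 256, size=(300, 300, 6)).astype(float)

skip_factor = 2                          # skip factor of 2: sample every other pixel
sample = image[::skip_factor, ::skip_factor, :].reshape(-1, image.shape[2])

# max_iter plays the role of the maximum iterations setting; tol is the
# analogue of the convergence threshold (stop once cluster centers barely move).
kmeans = KMeans(n_clusters=50, max_iter=24, tol=1e-3, n_init=1, random_state=0)
kmeans.fit(sample)

# Assign every pixel (not just the sampled subset) to its nearest cluster.
classified = kmeans.predict(image.reshape(-1, image.shape[2])).reshape(image.shape[:2])
```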

The most challenging aspect of this exercise was assigning the grouped classes correctly and ensuring the pixels in them came from the right areas. This led to issues later, when pixels from incorrect areas, labeled as "mixed," appeared in blatantly incorrect locations.

We also explored different comparison methods for analyzing the original and reclassified images using the toggle, flicker, blend, and highlight tools. These tools help identify areas that might have been misclassified.

The best tool introduced in this lab was the Recode tool, which allows us to combine multiple classes into fewer classes. For example, we took the UWF50 image and reduced it from 50 classes to 8 classes.
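Conceptually, the Recode tool is just a lookup table from old class numbers to new ones. A minimal numpy sketch of the same idea, with a made-up classified array and an abbreviated, hypothetical recode table:

```python
import numpy as np

# Made-up stand-in for the 50-class unsupervised result.
classified = np.random.randint(1, 51, size=(300, 300))

# Abbreviated, hypothetical recode table: original cluster number -> final class (1-8).
recode = {1: 1, 2: 1, 3: 2, 4: 3, 5: 2}   # ...and so on for all 50 clusters

# Build a lookup array so every pixel can be remapped in one vectorized step.
lookup = np.zeros(classified.max() + 1, dtype=int)
for old_class, new_class in recode.items():
    lookup[old_class] = new_class

recoded = lookup[classified]
```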



Exercise 2:

Exercise Two focuses on supervised classification, where the analyst trains the software to select classes based on pixel values and their surrounding neighborhoods. We were introduced to the Signature Editor, utilizing the polygon tool to select pixels in areas of interest, or we could use the AOI Seed Tool to expand a region around areas of known land cover.

Before adding signatures, we needed to create an Area of Interest (AOI) layer, which let us specify where signatures would be added. To create this AOI, we used the Inquire tool along with known coordinates. While we had coordinates for most of our selected classes, we had to identify our water and road features without specific coordinates. Initially this was challenging, but I realized I could move the cursor and adjust the Inquire tool to target the road and water features for the Grow tool.

I attempted this three times to achieve a satisfactory classification and found that the AOI Seed Tool was the most effective for this task, as it captured the pixels with similar spectral values better than my polygon drawing skills.

Another important skill gained from this exercise was analyzing histogram plots and mean plots to minimize spectral confusion by identifying bands with the least separation between signatures.
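The same comparison can be sketched numerically: for each band, compare the mean values of two signatures and keep the bands where the means are farthest apart. The mean signatures below are hypothetical, not the lab's actual values.

```python
import numpy as np

# Hypothetical mean signatures (one value per band) read off the Signature Editor.
water_mean  = np.array([22.0, 18.0, 15.0,  9.0,  6.0,  5.0])
forest_mean = np.array([25.0, 21.0, 19.0, 62.0, 48.0, 30.0])

# The band with the largest gap between the means is the least likely
# to cause spectral confusion between these two signatures.
separation = np.abs(water_mean - forest_mean)
best_band = int(np.argmax(separation)) + 1      # band numbering starts at 1
print(f"Greatest separation in band {best_band}: {separation.max():.1f}")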

We then proceeded to classify the images using Maximum Likelihood Classification, a parametric method that computes, for each pixel, the probability that it belongs to each class based on its spectral signature and assigns the pixel to the most likely class.
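Under the hood, a maximum likelihood classifier evaluates a multivariate normal density for each class, using the mean and covariance from its training signature, and keeps the class with the highest likelihood. A small sketch with hypothetical two-band class statistics:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical two-band class statistics estimated from training signatures.
classes = {
    "water":  {"mean": np.array([15.0,  8.0]), "cov": np.array([[4.0, 1.0], [1.0, 3.0]])},
    "forest": {"mean": np.array([30.0, 55.0]), "cov": np.array([[9.0, 2.0], [2.0, 12.0]])},
}

pixel = np.array([17.0, 10.0])   # spectral values of the pixel to classify

# Evaluate each class's Gaussian at the pixel and keep the most likely class.
likelihoods = {name: multivariate_normal(c["mean"], c["cov"]).pdf(pixel)
               for name, c in classes.items()}
print(max(likelihoods, key=likelihoods.get), likelihoods)
```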

Next, we created a Distance File that calculated the spectral Euclidean distance. In this file, brighter pixels indicated a higher probability of misclassification. Once we analyzed, confirmed, and refined our results, we merged the multiple classes using the Recode tool, just like in Exercise 1. Finally, we used the Calculate Area tool to determine how much area each class covered.
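A rough sketch of both outputs, with made-up arrays: the Euclidean distance from each pixel to the mean of its assigned class (the distance image), and pixel counts converted to area, assuming 30 m Landsat pixels:

```python
import numpy as np

# Made-up stand-ins: an 8-class classified image and its original band values.
classified = np.random.randint(0, 8, size=(300, 300))
bands = np.random.rand(300 * 300, 6) * 255
class_means = np.array([bands[classified.ravel() == c].mean(axis=0) for c in range(8)])

# Distance image: Euclidean distance from each pixel to its class mean.
# Brighter (larger) values are spectrally farther from the class and more
# likely to be misclassified.
distance = np.linalg.norm(bands - class_means[classified.ravel()], axis=1)
distance = distance.reshape(classified.shape)

# Area per class, assuming 30 m x 30 m pixels, reported in hectares.
pixel_area_ha = (30 * 30) / 10_000
for c in range(8):
    print(f"Class {c}: {np.sum(classified == c) * pixel_area_ha:.1f} ha")
```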


Sunday, November 16, 2025

Module 4 - Spatial Enhancement, Multispectral Data and Band Indices

In this week's lab, we encountered four tools that will be key in helping identify different features during image analysis: Histogram Analysis, Image Grayscale Analysis, Multi-spectral Band Experiments/Analysis, and Image Brightness Analysis. To practice this, we were tasked with finding features that fit certain criteria using the aforementioned methods. Furthermore, once those features were selected, we needed to choose multispectral band combinations that helped distinguish these features on the map. Below are the maps generated for each of those features.
Figure 1: Identification of water features.

Feature 1: WATER

In the Layer_4 histogram, there is a spike between pixel values 12 and 18. This is quite straightforward: the larger the feature, the larger the spike in the histogram, due to the high concentration of pixels at that brightness level. Additionally, the fact that the spike is on the left side of the histogram indicates that these are dark features. The large, dark body of water explains this spectral signature in the histogram.
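A hedged sketch of reproducing such a histogram outside ERDAS, assuming the band can be read into a numpy array (the file name and the use of rasterio are assumptions, not part of the lab):

```python
import numpy as np
import rasterio
import matplotlib.pyplot as plt

# Assumed file name; Layer_4 corresponds to band 4 of the multispectral image.
with rasterio.open("tm_scene.img") as src:
    layer4 = src.read(4)

counts, edges = np.histogram(layer4, bins=256, range=(0, 255))

plt.bar(edges[:-1], counts, width=1.0)
plt.xlabel("Pixel value (brightness)")
plt.ylabel("Pixel count")
plt.title("Layer_4 histogram: the spike near 12-18 is the dark water body")
plt.show()
```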

To highlight this feature in the image, the chosen filter was False Color Infrared (bands 4/3/2). This is because water appears dark, creating a stark contrast with the surrounding vegetation. Another acceptable band combination would have been False Natural Color (bands 5/4/3), where the dark blue water would be greatly contrasted against its green surroundings.



Figure 2: Identification of Snow on the Mountaintops

Feature 2: SNOW


The features that produce both A) a small spike around pixel value 200 in layers 1-4 and B) a large spike between pixel values 9 and 11 in Layer_5 and Layer_6 were deduced to be snow on the mountaintops.

The small spike around pixel value 200 in layers 1-4 represents a small number of pixels at a high brightness level on the right side of the histogram. These correspond to the small caps of bright snow on the mountaintop. As for the large spikes between values 9 and 11, these represent the dark areas of the mountain surrounding the snow, which far outnumber the snow pixels on the mountaintop.

To highlight this feature, the TM True Color combination of bands 3/2/1 was used, as the bright white snow contrasts strongly with its dark mountaintop surroundings.

Figure 3: Identification of varying water depth.

Feature 3: Varying Water Depths


Water color forms a gradient with depth: the shallower the water, the lighter the color and the more noticeable the changes in brightness. Once the water becomes deeper, which is the case for the vast majority of the water features in this image, the pixel values settle into consistently darker tones.

To highlight these features, a custom band combination of 5/2/1 was selected, in which one can distinctly see the depth-dependent gradient from the lighter to the darker water features. This combination also somewhat neutralized the features surrounding the water while still contrasting them with it.
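A minimal sketch of building a 5/2/1 composite manually, again assuming a rasterio-readable file; the percentile stretch here is a simple stand-in for whatever stretch ERDAS applies by default:

```python
import numpy as np
import rasterio
import matplotlib.pyplot as plt

def stretch(band):
    """Simple 2-98 percentile contrast stretch to the 0-1 display range."""
    lo, hi = np.percentile(band, (2, 98))
    return np.clip((band.astype(float) - lo) / (hi - lo), 0, 1)

# Assumed file name; display band 5 as red, band 2 as green, band 1 as blue.
with rasterio.open("tm_scene.img") as src:
    r, g, b = src.read(5), src.read(2), src.read(1)

rgb = np.dstack([stretch(r), stretch(g), stretch(b)])
plt.imshow(rgb)
plt.axis("off")
plt.show()
```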





Sunday, November 9, 2025

Module 3 - Intro to ERDAS Imagine and Digital Imaging

 

Map of Features Observed from the Landsat Satellite


This week's lab introduced us to ERDAS Imagine, which helps us analyze different types of satellite and aerial images. There are multiple concepts to consider when using various types of images for analysis, such as spatial resolution, pixel size, radiometric resolution, and temporal resolution.

Additionally, learning how to navigate the tools in ERDAS was key to this week's assignment. This included adding images, selecting different views, adjusting the appropriate display settings, navigating individual images, and manipulating spectral band combinations to identify specific elements in an image. Furthermore, we learned how to create a map using an ERDAS image and how to use the INQUIRE box feature to select a smaller section within a larger image.

Finally, we learned about raw/multiple layer continuous data, single-layer panchromatic continuous data, and categorical/single-layer thematic data.



One of the biggest takeaways from this week’s lab is that our eyes can be misleading, which is why we must always rely on scientific methods, processes, and tools. When comparing the radiometric resolution of RRC and RRD, the visual differences may seem minimal. However, using the highly sensitive ERDAS software reveals a significant contrast between Image C (4-bit) and Image D (8-bit). In fact, Image D can store 16 times as many brightness levels (256 versus 16), making it far more detailed and accurate during analysis.
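The 16x figure comes directly from bit depth: an 8-bit band stores 2^8 = 256 brightness levels, while a 4-bit band stores 2^4 = 16. A tiny sketch of what that quantization does to an assumed 8-bit band:

```python
import numpy as np

# Assumed 8-bit band (256 possible brightness levels).
band_8bit = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)

# Collapsing to 4 bits keeps only 16 levels; here we drop the 4 least
# significant bits and scale back so both arrays share the same display range.
band_4bit = (band_8bit // 16) * 16

print(len(np.unique(band_8bit)), "levels vs", len(np.unique(band_4bit)), "levels")
```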



Tuesday, November 4, 2025

Module 2: LULC Classification and Ground Truthing

 

Final LULC Classification Map for Pascagoula, Mississippi


This week’s lab was a whirlwind of both excitement and frustration! We learned about the concepts of Land Use Land Cover (LULC) classification, an important tool for understanding areas that have been photographed. Land use and land cover maps help governments, scientists, and others understand the biological and structural composition of areas on Earth. Remote sensing allows us to collect data in remote and hard-to-access locations while being unobtrusive to the geographic phenomena being observed. Land cover describes the biophysical features of the Earth’s surface, while land use refers to how humans shape the land for their purposes. There are multiple levels of land cover classification, denoted by numerical values; the higher the numerical value, the more detailed and specific the use or cover. In this lab, we focused on the broader levels 1 and 2. For example, a level 1 classification would be "water"; at level 2, this category is further subdivided into "rivers and canals," "lakes," and so on. If we went into even more detailed classifications, we would differentiate, for example, between oligotrophic and eutrophic lakes. As you can see, these classifications are essential for understanding different geographic areas.

To classify these areas, we created a polygon feature class for the LULC classifications, drew the polygons over the areas deemed to belong to a certain classification, and labeled them accordingly in the attribute table. An important distinction to make is between the minimum mapping unit (MMU) and map scale, which are closely related. The MMU is the smallest feature that can be reliably mapped, while scale, the ratio of a distance on the map to the corresponding distance on the ground, dictates that minimum mapping unit. The map scale used in this lab was 1:5,000. This scale struck a good balance: broad enough to digitize the polygons efficiently, yet detailed enough to distinguish features on the map by their shapes, textures, and tones. Creating these polygons was time-consuming and demanded attention to detail. However, ArcGIS Pro features like autocomplete polygons and edge/vertex snapping were key to ensuring that the polygons were neat and fit together properly. In a way, we created a giant jigsaw puzzle representing the different land use classifications, up to level 2, for Pascagoula, Mississippi. Once completed, we used "unique values" symbology for the polygons to make the different land use classes distinct from one another, and I gave each general level 1 classification similar gradient values for its corresponding level 2 classifications so the reader could distinguish them easily.

Ultimately, we must double-check our work whenever we present results. The same applies to remote sensing: we want people to be confident in our findings, and therefore we validate. There are three common accuracy assessments used in remote sensing: overall accuracy, producer's accuracy, and user's accuracy. User's accuracy takes the map user's perspective, giving the probability that a class shown on the map actually represents that class on the ground. Producer's accuracy takes the mapmaker's perspective, measuring how often features on the ground were classified correctly on the map. For this lab, we used overall accuracy, the simplest of the methods, where the number of correctly classified sites is divided by the total number of reference sites and multiplied by 100 to express it as a percentage. This method is not without drawbacks, including that it does not account for accuracy within individual classes. This can be an issue when a single class dominates an area compared to the other classes, which might have occurred during this lab.
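All three measures come from the same error (confusion) matrix; the small sketch below, using a hypothetical three-class matrix, shows how each is computed:

```python
import numpy as np

# Hypothetical error matrix: rows = classes on the map, columns = reference (ground) classes.
classes = ["Water", "Forest", "Urban"]
matrix = np.array([
    [10, 1, 0],   # mapped as Water
    [ 2, 9, 1],   # mapped as Forest
    [ 0, 2, 5],   # mapped as Urban
])

overall   = matrix.trace() / matrix.sum() * 100           # correct sites / total sites
users     = matrix.diagonal() / matrix.sum(axis=1) * 100  # per map class (row totals)
producers = matrix.diagonal() / matrix.sum(axis=0) * 100  # per ground class (column totals)

print(f"Overall accuracy: {overall:.1f}%")
for name, u, p in zip(classes, users, producers):
    print(f"{name}: user's {u:.1f}%, producer's {p:.1f}%")
```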

Additionally, there are different sampling methods; the method selected here was random sampling, because it provided an easy, unbiased sample selection for validation and a broader representation of the map extent. Since we could not collect in situ data, we used Google Maps, specifically the satellite view and Street View, to validate our data as closely as possible. We created a feature class for ground truthing and, using the random sampling mentioned above, visited the points on Google Maps that corresponded to our sampling locations. A drawback to this method was that a large number of points fell in land cover classes 51 and 61. Although these were valid sampling sites, they were the simplest areas to classify due to clear distinctions in color, shape, size, and texture. I believe this might have skewed the accuracy of my results; had more random sites fallen in the urban built-up area, the accuracy might have been lower.

The calculated overall accuracy is 26 correctly classified sites out of 30 total sites, which equals approximately 87%.


Thursday, October 30, 2025

Module 1 - Visual Interpretation

 

Exercise 1:

The objective of Exercise 1 was to interpret and distinguish features in an aerial photograph based on tone and texture. Tone refers to the uniformity and intensity of the coloration of a feature, and we distinguished between features of very light, light, medium, dark, and very dark tones. Texture, on the other hand, focuses on the visual uniformity of a feature. For example, the texture of the river was very fine since it was a uniform body of water, in contrast to the subdivision of homes, which was described as very coarse because of the many individual houses packed into one area. The best texture, for me, was the mottled texture, which was somewhat coarse in nature but composed of different shapes and sizes that contributed to that "mottled" effect.

Exercise 1 Map - Tones and Textures in an Aerial Photograph 


Exercise 2:

A: The objective of Exercise 2 was to use visual cues and attributes to identify features on an aerial photograph. At first, this seemed like a daunting task, but once I began, everything fell into place. There are five different attributes that help an analyst identify features in aerial photography: shape, size, pattern, shadows, and association. Shape and size were the easiest to use; I was able to identify a car, a pool, and a house immediately. Pattern was somewhat instinctual as well; I used the dotted line pattern to identify main roads and parking lines to identify parking lots. Shadows were also easy to grasp, as they can help distinguish features when their identity is unclear. Lastly, and perhaps my favorite, was association: things that are alike or related are likely to be found in proximity, so using this as a contextual clue provides further insight. The example I chose for association was a condominium complex. The clear indicators of this facility were the parking spots in front, the building that was obviously larger than a single home, and, most notably, the pool in the back attached to the building, which indicated a condominium community pool.

Exercise 2 Map: Identifying Features on an Aerial Photograph


Exercise 3:

The objective of Exercise 3 was to observe the difference between a true color image, which represents the colors we see with our eyes, and a false color (infrared) photograph. False color infrared is key in distinguishing variations of green. While the human eye may perceive one color, the NIR spectrum reveals different color variations that can provide more insight into plant health. I accomplished this objective, as the five features I selected matched the expected colors and shades for true color versus false color photography.


Notes Regarding North Arrow and Scale:

It is not advisable to add a north arrow or a scale bar to an aerial image. The reason for not including a north arrow is that we do not know the rotation or orientation of a photo, so it is not a good idea to assume north. As for the scale, it will vary due to the altitude and angle of capture of the image; therefore, the scale may introduce inaccuracies for the viewer. Fortunately, most aerial photographs come with metadata that provides crucial information, including scale and orientation.

Tuesday, June 24, 2025

Module 6- Working with Geometries

 


This last week of GIS programming brought working with geometries. We began by creating a search cursor and for loops to iterate over the geometry of shapefiles. Then we used loops to iterate over each feature (row), its geometry array, and finally its individual points, copying them to a text file for that shapefile.

After iterating over the rows with a search cursor, for loops, and the getPart() method to retrieve the geometry points from the rivers shapefile, we copied them to a text file. In the screenshot of the text file below, you can see the object IDs, vertex IDs, X/Y coordinates, and stream names that resulted from this process.


            

Figure 1: Text Script for River_ZM38

 

The previous steps we have been exposed to, importing modules, setting up an environment, and defining a workspace, have all become second nature, and creating for loops has become easier to grasp. However, this week we were introduced to the concept of "nested loops," a loop within a loop, which allows you to access information within features. For example, to access the specific points in our rivers shapefile, we had to access each individual row in that feature class using a for loop, and then iterate again over each point in the row to get each X, Y coordinate. This concept was easy to understand; however, using the getPart() method initially raised an error stating that "part" was not defined. After attending office hours, I realized it was not working because I had the vertex_ID in the wrong location and had an extra for loop that was not supposed to be there.
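A hedged sketch of the nested-loop structure described above; the workspace path, shapefile name, field name, and output file are assumptions rather than the exact lab values:

```python
import arcpy

arcpy.env.workspace = r"C:\GISProgramming\Module6\Data"      # assumed workspace

fc = "rivers.shp"                                            # assumed shapefile name
outfile = open(r"C:\GISProgramming\Module6\rivers_xy.txt", "w")

# Outer loop: one row per feature; inner loop: one vertex per point in that feature.
with arcpy.da.SearchCursor(fc, ["OID@", "SHAPE@", "NAME"]) as cursor:   # "NAME" is assumed
    for row in cursor:
        vertex_id = 0
        for point in row[1].getPart(0):      # getPart(0) returns the array of vertices
            vertex_id += 1
            # Indexing each item and converting everything to strings before
            # writing is what puts each vertex on its own line in the text file.
            outfile.write("Feature " + str(row[0]) + " Vertex " + str(vertex_id) +
                          " X: " + str(point.X) + " Y: " + str(point.Y) +
                          " Name: " + str(row[2]) + "\n")

outfile.close()
```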

 

The final hurdle was writing out the complete file. At first, only one line of text was copied into the text file. After seeking guidance, I learned that this was because I was not indexing the items properly and concatenating them together as strings.

The flowchart denoting the process of this script is shown below.

 

Figure 2: Flowchart for Mod6Script_Pousa




Overall, although coding can be challenging, this class has demonstrated that code will save us time and energy in any of our future GIS endeavors.

 

Wednesday, June 18, 2025

Module 5- Exploring and Manipulating Data



Figures 1-4: Creating a File geodatabase and copying data into it.








Flowchart denoting the cradle-to-grave process of creating a file geodatabase, copying data into it, and using cursors to retrieve and populate a data dictionary.


This week, we dove into manipulating data using Python. We learned the ease of copying data to geodatabases, creating a search cursor to extract data from said geodatabase, and then creating and populating an empty dictionary.

 

Steps 1-4

 

The first four steps for creating this script were straightforward, and I had no issues copying the data into the geodatabase. The issue I ran into came when I went back to add print statements in the for loop for each feature class: I accidentally had all the print statements say "create feature class" instead of listing the desired fields. It was not an easy fix; I had to restart ArcGIS and hope for the best.
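A minimal sketch of those first steps, with assumed folder and file names standing in for the actual lab paths:

```python
import arcpy

arcpy.env.workspace = r"C:\GISProgramming\Module5\Data"      # assumed workspace
arcpy.env.overwriteOutput = True

# Create the file geodatabase (assumed output folder and name).
out_folder = r"C:\GISProgramming\Module5\Results"
gdb_name = "mod5.gdb"
arcpy.management.CreateFileGDB(out_folder, gdb_name)

# Copy every feature class in the workspace into the new geodatabase,
# printing one statement per feature class.
for fc in arcpy.ListFeatureClasses():
    desc = arcpy.Describe(fc)
    out_fc = out_folder + "\\" + gdb_name + "\\" + desc.basename
    arcpy.management.CopyFeatures(fc, out_fc)
    print("Copied " + desc.basename + " to " + gdb_name)
```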

 

For Step 3, when using the search cursor, the field delimiters function needed to be used. This caused the most confusion, because I thought I could just use cursor = arcpy.SearchCursor(fc) and get the results back. But then I realized I had to use delimitedField = arcpy.AddFieldDelimiters(), which let me build a where clause that narrowed the search to the desired features in the feature class.

Finally, I had to make sure to convert the population field, which was an integer, to a string when getting its value, because Python cannot concatenate integers with strings.
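A sketch of that cursor, with hypothetical field names (FEATURE, NAME, POP_2000) and a hypothetical attribute value standing in for the lab's actual data:

```python
import arcpy

fc = "cities.shp"                                            # assumed feature class

# Build a where clause with the correct delimiters for this data source;
# "FEATURE" and 'County Seat' are hypothetical field and value names.
delimitedField = arcpy.AddFieldDelimiters(fc, "FEATURE")
whereClause = delimitedField + " = 'County Seat'"

cursor = arcpy.SearchCursor(fc, whereClause)
for row in cursor:
    # getValue returns the stored type, so the integer population field
    # must be cast with str() before it can be concatenated to text.
    print(row.getValue("NAME") + ", population " + str(row.getValue("POP_2000")))
del cursor
```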

 Step 5

Creating the dictionary proved to be the most difficult task of all. I could not figure out why the for loop and the function I chose for adding key-value pairs to the dictionary kept printing an empty dictionary. Finally, after meeting at office hours, I realized that I had to create a second search cursor for the dictionary creation and pass the population and the city names as parameters. Additionally, although dictionaries are not indexed by position, the cursor rows are, so I needed to start with row[0] to populate the values. This was a key part of the dictionary's creation.
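And a sketch of the second cursor populating the dictionary, again with hypothetical field names:

```python
import arcpy

fc = "cities.shp"                                            # assumed feature class
city_populations = {}                                        # start with an empty dictionary

# Second search cursor: row[0] is the city name, row[1] is the population,
# so each pass through the loop adds one key-value pair to the dictionary.
with arcpy.da.SearchCursor(fc, ["NAME", "POP_2000"]) as cursor:
    for row in cursor:
        city_populations[row[0]] = row[1]

print(city_populations)
```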

 

