Sunday, March 15, 2026

Module One- Introduction to Cartography and Map Design

Figure 1- Well Designed Map - SC Wildlife Zones



The map I selected is the Wildlife Map from the South Carolina Department of Natural Resources, found in the R‑Drive. I chose it because, despite a few areas that could be improved, it is simple, easy to understand, and communicates its purpose effectively.

Purpose, Look, and Audience

The map’s purpose is to show the different game zones used for wildlife management in South Carolina, which is made clear through the title and legend. Its look and feel are simple, practical, and informative, presenting a lot of information in a way that anyone can quickly interpret. The intended audience is the general public—people interested in hunting, wildlife conservation, or anything requiring awareness of the state’s game zones. The educational level needed is minimal since the zones are color‑coded and clearly labeled.

Cartographic Design

The map emphasizes the main theme through its use of color. Only the game zones are colored, which makes them stand out against the white background. The symbology is simple but effective: each zone is represented by a different color, outlined with thick borders, and labeled with a number inside the zone. This makes it easy to identify each area without relying heavily on the legend.

Improvements: Some colors are too similar—especially between Zones 1 and 6, and Zones 2 and 5—which may confuse some viewers. Blue and lilac are also not ideal choices for land areas. A neutral background color instead of white would give the map a more finished look.

The symbols and labels are mostly legible, except for a few city names. The symbols are intuitive, and the thick borders and centered numbers make the zones easy to understand. The map also uses graphics and text blocks appropriately, including the official state seal and important information like the source and publication date.

Map Elements and Layout

The map is generally well balanced, and the creator used the landscape layout effectively to fit the shape of the state. The state fills most of the page, and the north arrow, scale bar, and text boxes are placed in a way that doesn’t distract from the main map. The legend is close to the state and sized appropriately. The only change I would make is swapping the north arrow and scale bar with the seal and text boxes to reduce clutter in the bottom‑left corner.

The map has appropriate borders, though slightly thicker ones would give it a more polished look.

Scale and Legend

The map extent is appropriate because it shows the entire state and includes enough detail to display the cities within each zone. The scale bar is simple, uses miles, and is placed near the state for easy reference.

The legend includes all necessary symbols and details and is organized logically from Zone 1 to Zone 6. The labels are clear, though adding “Game” before each zone name is redundant since the legend title already states that these are game zones.

Titles and Subtitles

The title is brief, descriptive, and clearly communicates the map’s purpose. It is the largest text on the page and is positioned well in the open space near the top right. The subtitles are smaller, non‑distracting, and easy to read.





Figure 2 - Poor Map Choice: "Bellevue"


 For this assignment, I chose a map from the R‑Drive that wasn’t obviously terrible at first glance. I wanted to challenge myself with something that looked acceptable to a non‑GIS viewer but still had real design issues. I even asked a non‑GIS friend to look at it, and her impressions matched mine.

Purpose, Look, and Audience

The map’s purpose isn’t clearly communicated. Although the legend suggests it is meant to show public facilities across Bellevue, the title doesn’t say this, and the layout doesn’t help clarify it. The overall look and feel of the map is overwhelming—there is so much going on visually that instead of helping the viewer understand where key facilities are, it creates confusion.

The intended audience seems to be everyday residents or visitors who need general location information, assuming they can read basic directional cues.

Cartographic Design

Some visual themes are emphasized well, such as the contrast between urban areas, green spaces, and surrounding water. These help give a sense of the city’s layout. But the map is so cluttered that even these distinctions start to blur.

The symbology for public facilities is mostly effective. Points are used for specific locations, and color‑coded areas represent broader features like parks. However, some color‑coded areas—like the dark and light gray regions—aren’t explained in the legend, leaving the viewer to guess their meaning.

The color scheme generally works: it’s earthy, not distracting, and highways stand out clearly from smaller streets. But the labels and symbols are hard to read because the font is tiny and the map is visually crowded. Some symbol colors blend into the background, making them difficult to distinguish.

The symbols themselves are intuitive—fire stations in red, schools with a flag symbol, police stations in blue, parks in green, water in blue. There are no extra graphics, but there is a text box in the bottom right corner that is nearly unreadable because it clashes with the map.

Map Elements and Layout

The page layout feels unbalanced. The title and legend are both on the left, making that side feel heavy, while the logo and text box on the right are too small to balance it out. The borders are inconsistent as well, with the bottom border thicker than the others.

Most map elements support the map’s goals, but the scale bar does not. It is shown in feet, which is unusual for a city map, and it is placed awkwardly under the north arrow, making it easy to miss. The north arrow itself is too large and ornate. Both elements could have been placed more thoughtfully.

Scale and Legend

The map extent is reasonable because it includes the whole city, though some might prefer a closer view that excludes surrounding water. Including the water does help show the city’s context, so either choice could be justified.

The legend is only partially complete. Several layers, like water, gray areas, purple areas, and smaller streets, are missing. The structure of the legend is logical, though, and the labels themselves are intuitive.

Titles and Subtitles

The title is not descriptive enough and doesn’t explain the map’s purpose. It is also too small, and the subtitles are even smaller and nearly impossible to read.

Saturday, March 7, 2026

Story Map About Me!

 Hello everyone!

I’m Zenia, and I couldn’t be more excited to be here today!

I’m a grad student pursuing my GIS analyst degree, and my undergraduate degree is in Environmental Sciences from Grand Canyon University. School is one of my favorite things in life (weird, I know) but I genuinely enjoy learning new skills and exploring parts of the academic world I haven’t experienced before.

I fell in love with GIS during my undergrad and was thrilled to discover that UWF offered a master's program. At the end of the program, I hope to secure a GIS job in the public sector and bring some much-needed skills with me. Once I complete my master's, I hope to continue on to a PhD, where I will use GIS as an additional tool to help governments plan and support one of our most important life-sustaining operations: agriculture.

If there were an easy way to describe me, it would be “different” … ha-ha. I bounce between being very extroverted and friendly in public, yet incredibly introverted at home and in my personal life. I enjoy nature, painting, reading, singing, and dancing.

I also feel incredibly lucky to be alive and grateful for the opportunities I’ve been given.

Check out my story map for more about me :)

Story Map Link - https://arcg.is/11bnfW5




Tuesday, November 25, 2025

Module 5 - Unsupervised and Supervised Image Classification

 

Figure 1: Map of Current Land Use in Germantown, Maryland

Exercise 1:

This exercise focused on unsupervised classification. Unlike supervised classification, which includes a training phase where pixels from known classes inform the process, unsupervised classification lets the software group pixels based on their spectral characteristics; at the end of the process, the user assigns a class to each group. The accuracy of the classification is influenced by the maximum-iteration setting and the convergence threshold: the iterations allow repeated analysis of the area, while the convergence threshold determines how confident the software must be that the pixels are accurately classified before it stops. Additionally, the skip factor governs how pixel analysis is conducted and affects processing time. For example, a skip factor of 1 analyzes pixels one by one, whereas a skip factor of 2 analyzes only every other pixel.
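The clustering loop described above can be sketched in Python. This is a toy k-means stand-in for the software's clustering routine, not ERDAS's actual implementation: `max_iter` caps the repeated passes, `tol` plays the role of the convergence threshold, and `skip` mirrors the skip factor. All names and defaults are illustrative.

```python
import numpy as np

def unsupervised_classify(image, n_classes=8, max_iter=20, tol=0.01, skip=1):
    """Toy k-means clustering of multispectral pixels (illustrative only).

    `tol` acts as the convergence threshold: iteration stops once the
    fraction of pixels changing cluster falls below it. `skip=2` analyzes
    only every other pixel, trading accuracy for processing time.
    """
    pixels = image.reshape(-1, image.shape[-1])[::skip].astype(float)
    rng = np.random.default_rng(0)
    centers = pixels[rng.choice(len(pixels), n_classes, replace=False)]
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(max_iter):
        # Assign every pixel to its spectrally nearest cluster center.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        changed = np.mean(new_labels != labels)
        labels = new_labels
        # Recompute each center as the mean of its member pixels.
        for k in range(n_classes):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
        if changed < tol:  # convergence threshold reached
            break
    return labels, centers
```

After the loop finishes, the analyst would still label each of the `n_classes` clusters by hand, exactly as the exercise required.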

The most challenging aspect of this exercise was assigning the appropriate classes to the grouped pixels and ensuring they came from the correct areas. This led to issues later, when stray pixels from incorrect areas—denoted as "mixed"—appeared in blatantly incorrect locations.

We also explored different comparison methods for analyzing the original and reclassified images using the toggle, flicker, blend, and highlight tools. These tools help identify areas that might have been misclassified.

The best tool introduced in this lab was the Recode tool, which allows us to combine multiple classes into fewer classes. For example, we took the UWF50 image and reduced it from 50 classes to 8.
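Conceptually, recoding is just a lookup table from old class IDs to merged class IDs. A minimal numpy sketch, with a made-up 50-to-8 mapping that is not the lab's actual grouping:

```python
import numpy as np

# Hypothetical recode table mapping 50 original cluster IDs (0-49) down to
# 8 merged classes. The i % 8 grouping is a stand-in, not the lab's mapping.
recode_table = np.array([i % 8 for i in range(50)])

def recode(classified, table):
    """Merge classes by using each old class ID as an index into the table."""
    return table[classified]

old = np.array([[0, 9, 17], [25, 42, 49]])
merged = recode(old, recode_table)  # every value now falls in 0-7
```

Because the lookup is vectorized, the same one-liner reclassifies an entire image at once.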



Exercise 2:

Exercise Two focused on supervised classification, where the analyst trains the software to assign classes based on pixel values and their surrounding neighborhoods. We were introduced to the Signature Editor, using the polygon tool to select pixels in areas of interest or the AOI Seed Tool to grow a region around areas of known land cover.

Before adding signatures, we needed to create an Area of Interest (AOI) layer, allowing us to specify where signatures would be added. To create this AOI, we used the Inquire Tool along with known coordinates. While we had coordinates for most of our selected classes, we had to identify the water and road features without specific coordinates. Initially this was challenging, but I realized I could move the cursor and adjust the Inquire Tool to target the road and water features for the Grow Tool.

I attempted this three times to achieve a satisfactory classification and found that the AOI Seed Tool was the most effective for this task, as it captured the pixels with similar spectral values better than my polygon drawing skills.

Another important skill gained from this exercise was analyzing histogram plots and mean plots to minimize spectral confusion by identifying bands with the least separation between signatures.

We then proceeded to classify the images using Maximum Likelihood Classification, a parametric method that computes, for each pixel, the probability that it corresponds to each spectral signature and assigns it to the most likely class.
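A minimal sketch of the maximum likelihood rule, assuming Gaussian class signatures. The means and covariances here stand in for the per-class statistics a Signature Editor would collect; this is illustrative, not ERDAS's implementation:

```python
import numpy as np

def max_likelihood_classify(pixels, means, covs):
    """Assign each pixel to the class whose Gaussian signature model gives
    it the highest log-density (sketch under assumed normal signatures)."""
    scores = []
    for mu, cov in zip(means, covs):
        diff = pixels - mu
        inv = np.linalg.inv(cov)
        # Squared Mahalanobis distance of each pixel from the class mean.
        mahal = np.einsum('...i,ij,...j->...', diff, inv, diff)
        log_det = np.linalg.slogdet(cov)[1]
        scores.append(-0.5 * (mahal + log_det))  # log-likelihood up to a constant
    return np.stack(scores, axis=-1).argmax(axis=-1)
```

The `log_det` term is what makes the method parametric: two classes with the same mean but different spreads are still distinguished.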

Next, we created a Distance File that calculated the spectral Euclidean distance. In this file, brighter pixels indicate a higher probability of misclassification. Once we analyzed, confirmed, and refined our results, we merged the multiple classes using the Recode Tool, just like in Exercise 1. Finally, we used the Calculate Area Tool to determine how much area was affected.
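The distance file idea can be sketched the same way: measure each pixel's spectral Euclidean distance to the mean of its assigned class, so larger (brighter) values flag pixels that fit their class poorly. A hypothetical numpy version:

```python
import numpy as np

def distance_image(pixels, class_means, labels):
    """Spectral Euclidean distance from each pixel to the mean of its
    assigned class; brighter values flag likely misclassification."""
    return np.linalg.norm(pixels - class_means[labels], axis=-1)
```

Rendering this array as a grayscale image reproduces the "bright pixels = suspect pixels" reading described above.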


Sunday, November 16, 2025

Module 4 - Spatial Enhancement, Multispectral Data and Band Indices

In this week's lab, we encountered four tools that will be key in helping identify different features during image analysis: Histogram Analysis, Image Grayscale Analysis, Multi-spectral Band Experiments/Analysis, and Image Brightness Analysis. To practice this, we were tasked with finding features that fit certain criteria using the aforementioned methods. Furthermore, once those features were selected, we needed to choose multispectral band combinations that helped distinguish these features on the map. Below are the maps generated for each of those features.
Figure 1- Identification of water features. 

Feature 1: WATER

In Layer_4, the water feature produces a spike between pixel values 12 and 18. This is quite straightforward: the larger the feature, the larger the spike in the histogram, due to the high concentration of pixels at that brightness level. The spike's position on the left side of the histogram indicates that these are dark features. The large, dark body of water explains this spectral signature.
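The histogram reasoning can be reproduced with numpy on a synthetic band: a large dark feature piles thousands of pixels into a few low brightness values, producing a spike on the left of the histogram. The array below is fabricated for illustration, with the water stand-in placed at value 15 (inside the 12-18 range).

```python
import numpy as np

# Synthetic band: a large dark water body concentrates thousands of pixels
# at one low brightness value, while the rest of the scene is spread thin.
band = np.concatenate([
    np.full(5000, 15),            # dark water pixels (stand-in for the spike)
    np.arange(2000) % 200 + 30,   # everything else, spread over values 30-229
])
counts, _ = np.histogram(band, bins=range(257))
peak_value = int(counts.argmax())  # 15: the spike sits on the histogram's left
```

The same check generalizes: a tall, narrow peak at low values is the histogram fingerprint of a large, uniformly dark feature.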

To highlight this feature in the image, the chosen filter was False Color Infrared (bands 4/3/2). This is because water appears dark, creating a stark contrast with the surrounding vegetation. Another acceptable band combination would have been False Natural Color (bands 5/4/3), where the dark blue water would be greatly contrasted against its green surroundings.



Figure 2 - Identification of Snow on the Mountaintops

Feature 2 : SNOW: 


The features that produce A) a small spike in layers 1-4 around pixel value 200 and B) a large spike between pixel values 9 and 11 in Layer_5 and Layer_6 were deduced to be snow on the mountaintops.

The small spike around pixel value 200 represents a small number of pixels at a high brightness level on the right side of the histogram. These correspond to the small caps of bright snow on the mountaintops. The large spikes between values 9 and 11 represent the dark areas of the mountain surrounding the snow, which far outnumber the snow pixels.

To highlight this feature, the TM True Color combination of bands 3/2/1 was used, as the bright white snow highly contrasted with its dark mountain top surroundings.

Figure 3- Identification of varying water depth. 

Feature 3: Varying Water Depths


Water color varies with depth: the shallower the water, the lighter the color and the more noticeable the changes in brightness. As the water deepens—which is the case for the vast majority of the water features in this image—the color settles into a uniform dark tone.

To highlight these features, a custom band combination of 5/2/1 was selected; it distinctly shows the depth-dependent gradient from lighter to darker water while somewhat neutralizing, yet still contrasting, the features surrounding the water.





Sunday, November 9, 2025

Module 3 - Intro to ERDAS Imagine and Digital Imaging

 

Map of Features observed from LANDSAT Satellite


This week's lab introduced us to ERDAS Imagine, which helps us analyze different types of satellite and aerial images. There are multiple concepts to consider when using various types of images for analysis, such as spatial resolution, pixel size, radiometric resolution, and temporal resolution.

Additionally, learning how to navigate the tools in ERDAS was key to this week's assignment. This included adding images, selecting different views, adjusting the appropriate display settings, navigating individual images, and manipulating spectral band combinations to identify specific elements in an image. Furthermore, we learned how to create a map using an ERDAS image and how to use the INQUIRE box feature to select a smaller section within a larger image.

Finally, we learned about raw/multiple layer continuous data, single-layer panchromatic continuous data, and categorical/single-layer thematic data.



One of the biggest takeaways from this week’s lab is that our eyes can be misleading, which is why we must always rely on scientific methods, processes, and tools. When comparing the radiometric resolution of RRC and RRD, the visual differences may seem minimal. However, using the highly sensitive ERDAS software reveals a significant contrast between Image C (4-bit) and Image D (8-bit). In fact, Image D is 16 times more detailed and accurate during analysis.
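The "16 times" figure follows directly from bit depth: each additional bit doubles the number of distinguishable gray levels, so four extra bits multiply them by 2^4. A quick check:

```python
# Each extra bit doubles the number of gray levels a pixel can store,
# so 8-bit data distinguishes 2**(8-4) = 16 times as many levels as 4-bit.
levels_4bit = 2 ** 4   # 16 gray levels
levels_8bit = 2 ** 8   # 256 gray levels
ratio = levels_8bit // levels_4bit  # 16
```

This is why two images that look similar to the eye can behave very differently under quantitative analysis.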



Tuesday, November 4, 2025

Module 2: LULC Classification and Ground Truthing

 

Final LULC Classification Map for Pascagoula, Mississippi


This week’s lab was a whirlwind of both excitement and frustration! We learned about the concepts of Land Use Land Cover (LULC) classifications, which are important tools for understanding areas that have been photographed. Land use and land cover maps help governments, scientists, and others understand the biological and structural composition of areas on Earth. Remote sensing allows us to collect data in remote and hard-to-access locations while being unobtrusive to the geographic phenomena being observed. Land cover describes the biophysical features of the Earth’s surface, while land use refers to how humans shape the land for their purposes. There are multiple levels of land cover classifications, denoted by numerical values; the higher the numerical value, the more detailed and specific the use or cover. In this lab, we focused on the broader classifications, levels 1 and 2. For example, a level 1 classification would be “water”; however, this category is further subdivided into “rivers and canals,” “lakes,” etc. If we were to go into even more detailed classifications, we would differentiate between oligotrophic and eutrophic lakes, for example. As you can see, these classifications are essential for understanding different geographic areas.
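The hierarchy can be pictured as nested codes, Anderson-style, where a level 2 code's leading digit identifies its level 1 parent. The entries below are a few illustrative examples, not the full classification table:

```python
# Anderson-style LULC hierarchy sketch. Codes and labels are illustrative
# examples only, not the lab's complete classification table.
lulc = {
    1: ("Urban or Built-up Land", {11: "Residential", 12: "Commercial and Services"}),
    5: ("Water", {51: "Streams and Canals", 52: "Lakes"}),
    6: ("Wetland", {61: "Forested Wetland", 62: "Nonforested Wetland"}),
}

def level1_of(level2_code):
    """A level 2 code's leading digit is its level 1 parent (e.g. 51 -> 5)."""
    return level2_code // 10
```

The numeric scheme is what makes the codes self-describing: 51 and 61 instantly read as a water subtype and a wetland subtype.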

To classify these areas, we created a polygon feature class for the LULC classifications, drew the polygons over the areas deemed a certain classification, and labeled them accordingly in the attribute table. An important distinction to make is between the minimum mapping unit (MMU) and map scale, which are closely related. The MMU is the smallest feature that can be reliably seen on a map, while scale dictates that minimum mapping unit by determining the ratio of a distance on a map to the distance on the ground. The map scale used in this lab was 1:5,000. This scale was large enough to create the polygons efficiently but small enough to distinguish features on the map, such as shapes, textures, and tones. Creating these polygons was time-consuming and extensive; one has to pay attention to detail. However, ArcGIS Pro's features, like autocomplete polygons and edge/vector snapping, were key in ensuring that the polygons were neat and fit together properly. In a way, we created a giant jigsaw puzzle that represented the different land use classifications, up to level 2, for Pascagoula, Mississippi. Once completed, we ensured that we used "unique values" in the symbology for the polygons to make different land use classes distinct from one another. I ensured that the general level 1 classifications had similar gradient values for their corresponding level 2 classifications to make it easy for the reader to distinguish.

Ultimately, we must double-check our work whenever we present results. The same applies to remote sensing; we want people to be confident in our findings, and therefore we validate. There are three types of accuracy assessments used in remote sensing: overall accuracy, producer's accuracy, and user's accuracy. User's accuracy takes the user's perspective to determine the probability that a classification on the map represents what is on the ground. Producer's accuracy assesses how correctly the mapmaker classified something on the ground. For this lab, we used "overall accuracy." This is the simplest of the methods, where the number of correctly classified sites is divided by the total number of items and multiplied by 100 to express it as a percentage. This method is not without some drawbacks, including not accounting for accuracy among individual classes. This can be an issue if a single class is dominant in an area compared to other classes, which might have occurred during this lab.
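Overall accuracy, as described above, is a one-line formula: correctly classified sites divided by total sites, expressed as a percentage.

```python
def overall_accuracy(correct_sites, total_sites):
    """Overall accuracy: correctly classified sites / total sites, x 100."""
    return correct_sites / total_sites * 100

acc = overall_accuracy(26, 30)  # about 86.7 for this lab's 26-of-30 result
```

Its simplicity is also its weakness: one dominant, easy-to-classify class can inflate the percentage even when smaller classes are mapped poorly.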

Additionally, there are different sampling methods. The selected method was random sampling. This was because it provided an easy, non-biased sample selection for validation, making the process easier, more valid, and offering broader representation of the extent. Since we cannot collect in situ data, we used Google Maps, specifically the satellite view and street mode, to validate our data most closely. We created a feature class for truthing. Using the random sampling mentioned above, we went to the points on Google Maps that corresponded to our sampling locations. A drawback to this method was that there were a large number of points in land classes 51 and 61. Although these were valid sampling sites, they were the simplest areas to classify due to clear distinctions based on color, shape, size, and texture. I believe this might have skewed the accuracy of my results. Had more random sites been selected in the urban built-up area, I believe the accuracy might have been lower.
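Random sampling of validation points amounts to drawing coordinates uniformly within the study extent. A sketch with a hypothetical bounding box; a real run would use the Pascagoula study area's actual extent:

```python
import numpy as np

def random_sample_points(n, xmin, ymin, xmax, ymax, seed=0):
    """Draw n unbiased validation-point locations inside a bounding box.
    The extent values below are placeholders, not the real study area."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(xmin, xmax, n)
    ys = rng.uniform(ymin, ymax, n)
    return np.column_stack([xs, ys])

pts = random_sample_points(30, 0, 0, 100, 100)  # 30 points, placeholder extent
```

Because every location in the extent is equally likely, the draw is unbiased; but as noted above, it can still land disproportionately in whichever class covers the most area.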

The calculated overall accuracy is 26 correctly classified sites out of 30 total sites, which comes to roughly 86.7%.


Thursday, October 30, 2025

Module 1 - Visual Interpretation

 

Exercise 1:

 The objective of Exercise 1 was to interpret and distinguish features in an aerial photograph based on tone and texture. Tone refers to the uniformity and intensity of the coloration of a feature. We distinguished between features of very light, light, medium, dark, and very dark tones. Texture, on the other hand, focuses on the visual uniformity of a feature. For example, the texture of the river was very fine since it was a uniform body of water, which is in contrast to the subdivision of homes that was described as very coarse due to the multiple homes in one area, differing from other objects in that location. The best texture for me was the mottled texture, which was somewhat coarse in nature but composed of different shapes and sizes that contributed to that “mottled” effect.

Exercise 1 Map - Tones and Textures in an Aerial Photograph 


Exercise 2:

The objective of Exercise 2 was to use visual cues and attributes to identify features on an aerial photograph. At first, this seemed like a daunting task, but once I began, everything fell into place. There are five attributes that help an analyst identify features in aerial photography: shape, size, pattern, shadows, and association. Shape and size were the easiest to use; I was able to identify a car, a pool, and a house immediately. Pattern was somewhat instinctual as well; I used the dotted line pattern to identify main roads and parking lines to identify parking lots. Shadows were also easy to grasp, as they can help distinguish features when their identity is unclear. Lastly, and perhaps my favorite, was association. Things that are alike or related are likely to be found in proximity, so using this as a contextual clue provides further insight. The example I chose for association was a condominium complex. The clear indicators of this facility were the parking spots in front, the large building that was obviously bigger than a home, and, most notably, the pool in the back, which was attached to the building. This indicated it was a condominium community pool.

Exercise 2 Map: Identifying Features on an Aerial Photograph


Exercise 3:

The objective of Exercise 3 was to observe the difference between a true color image, which represents the colors we see with our eyes, and a false color (infrared) photograph. False color infrared is key in distinguishing variations of green. While the human eye may perceive one color, the NIR spectrum reveals different color variations that can provide more insight into plant health. I accomplished this objective, as the five features I selected matched the expected colors and shades for true color versus false color photography.


Notes Regarding North Arrow and Scale:

It is not advisable to add a north arrow or a scale bar to an aerial image. The reason for not including a north arrow is that we do not know the rotation or orientation of a photo, so it is not a good idea to assume north. As for the scale, it will vary due to the altitude and angle of capture of the image; therefore, the scale may introduce inaccuracies for the viewer. Fortunately, most aerial photographs come with metadata that provides crucial information, including scale and orientation.
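The scale variability comes straight from the standard vertical-photo scale relation, scale = f / (H − h): as flying height H or terrain elevation h changes across the frame, so does the scale. A quick worked example with assumed camera values:

```python
def photo_scale(focal_length_m, flying_height_m, terrain_elev_m=0.0):
    """Nominal scale of a vertical aerial photo: f / (H - h). Because H
    and h vary across a flight (and tilt distorts geometry further), a
    single printed scale bar can mislead the viewer."""
    return focal_length_m / (flying_height_m - terrain_elev_m)

# Assumed values for illustration: a 152 mm camera flown at 3,200 m
# above datum, over terrain at 160 m elevation.
s = photo_scale(0.152, 3200, 160)
denominator = round(1 / s)  # 20000, i.e. a 1:20,000 photo
```

Raise the terrain by a few hundred meters and the denominator shrinks noticeably, which is exactly why a fixed scale bar on an unrectified aerial photo is unreliable.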
