Beyond Tomorrow: Adapting to the Unfolding AI Era
Navigating the AI Era: Opportunities, Challenges, and Ethical Considerations

By: Paula Aidoo
In the pages of history, whispers of artificial intelligence (AI) have captivated our imagination. Long before the 21st century dawned, visionaries envisioned machines that could think and learn like humans. Now, the prospect of having an AI companion by your side is no longer just a dream; it’s becoming a reality.
“AI + humans” – that's the tagline for the future of work. But here's the kicker: You're torn between feeling worried, excited, or just plain terrified about what lies ahead.
The future holds exciting possibilities, but with hurdles to overcome. AI is revolutionizing how we work in numerous ways. Imagine AI helping you brainstorm content ideas for your side hustle or social media page, generating fresh posts and captions in seconds. Or think about an AI-powered app that optimizes your daily schedule, pinpointing the best times to study based on your classes, and navigating the fastest subway route to campus—accounting for any delays along the way.
There are many reasons students should be on the lookout for AI; let’s explore its potential benefits and how it can shape your academic and professional journey.
AI isn’t perfect
Welch Labs. "AI Can't Cross This Line and We Don't Know Why." YouTube, uploaded by Welch Labs.
AI is advancing rapidly, but it's important to recognize its limitations. One concept that illustrates this is the "efficient compute frontier": a boundary beyond which adding more computing power yields diminishing returns in AI model performance, suggesting that AI will never be a one-stop shop for all of life's inquiries.
AI hallucinations are also a significant issue. In a Stanford study, AI models used for legal cases were shown to hallucinate 17% to 33% of the time. That’s a lot of made-up cases (Magesh et al., 2024).
AI can never be perfect. While innovations will keep shifting the frontier of what AI can do, its true value depends on how humans interpret and apply it. And there are a lot of applications.
The AI craze
Businesses are striving to apply AI in every corner possible. AI adoption is accelerating, with revenues from AI businesses expected to reach $190 billion by 2025. By 2030, AI technology is projected to add $15.7 trillion to global gross domestic product (GDP), according to the World Economic Forum. "Meta is spending billions of dollars on Nvidia’s popular computer chips, which are at the heart of artificial intelligence research and projects," alongside other organizations' initiatives (Vanian, 2024).
However, these optimistic visions of the future can only be realized through collaboration and proactive measures. Without them, AI could bring serious consequences, consequences that many Americans already fear.
The Dangers of AI
The fear of AI is justified. It can lead to algorithmic biases and job displacement. According to Tech Monitor, Dell is expected to cut 12,500 jobs, largely influenced by its focus on AI. Additionally, AI raises ethical concerns over autonomy and ownership, and it could be misused or weaponized for malicious purposes.
So, why should students pay attention to AI?
By understanding AI concepts and acquiring relevant skills, you'll not only be better prepared for the future job market but also positioned to thrive in a world where AI integrates into every aspect of our lives and work. 
Career-wise, students with AI experience are much more likely to get hired and will be more resilient to AI job displacement, meaning more job security.
It's well known that AI can help you be more productive. "A new study on the impact of generative AI on highly skilled workers finds that when artificial intelligence is used within the boundary of its capabilities, it can improve a worker’s performance by nearly 40% compared with workers who don’t use it" (Somers, 2023). This means that a task that typically takes 4 hours could potentially be completed in about 2.4 hours. "However, when AI is used outside that boundary to complete a task, worker performance drops by an average of 19 percentage points" (Somers, 2023), underscoring the importance of human oversight.
In the near term, AI can further explain course topics, help you brainstorm ideas for assignments, and be a study partner for upcoming exams. The possibilities are endless.
Food for thought
"AI is no longer some futuristic idea; it’s already being integrated into every aspect of our lives and in every industry, from healthcare and education to finance and travel. The steps we take today — in terms of where we apply AI, who participates in creating it, who can access it, and how informed we all are about its impact on our daily lives — will play an important part in shaping the future of our society. Now is the time for all of us to become AI literate." 
"As individuals more deeply embrace these technologies to augment, improve, and streamline their lives, they are continuously invited to outsource more decision-making and personal autonomy to digital tools”(Anderson, J., & Rainie, L., 2023).
Think about the skills you'll need to succeed in an AI-driven world. It's not just about technical expertise; it's also about adaptability, critical thinking, and creativity.
We stand at the threshold of an uncertain future. The key to navigating the AI era lies in adaptation. To thrive in this dynamic landscape, individuals must cultivate a mindset of continuous learning and agility. 
In the job market, employers are seeking individuals who can leverage AI to solve complex problems, innovate, and drive business growth. 
"It should be noted that in explaining their answers, most of these experts agreed that the future of digital systems is likely to hold both positive and negative consequences for human agency. Most of the overall expert group also agreed that the current moment is a turning point that will determine a great deal about the authority, autonomy, and agency of humans as the use of digital technology spreads inexorably into more aspects of daily life" (Anderson, J., & Rainie, L., 2023).
Together, let's harness the challenges and opportunities of the AI era, while also being mindful of its potential risks. Our actions today will shape the world of tomorrow.

Part 2
What is AI, and How Does it Work?
At its core, AI refers to the simulation of human intelligence in machines, enabling them to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. Unlike traditional computer programs, AI systems can analyze vast amounts of data, learn from patterns, and make decisions like humans.
AI operates on the principle of processing vast amounts of data, recognizing patterns, and making decisions based on learned insights. Machine Learning (ML), a subset of AI, facilitates this by enabling algorithms to gradually improve their performance without needing humans to explicitly tell them how to do so.
Picture by Josef Steppan. Used under a CC BY-SA 4.0 license.
For example, imagine we have a dataset of handwritten digits (0-9) and want to build an AI model that can recognize and classify them. The algorithm starts with little knowledge and randomly assigns weights, or levels of importance, to different features of the digits, such as the presence of curves or straight lines. As the algorithm processes more examples of handwritten digits, it adjusts these weights based on whether its predictions match the actual labels associated with each digit.
Through models such as the neural networks used in deep learning, AI systems can capture complex relationships and make predictions or decisions with great accuracy. For instance, the neural network might learn that a loop sitting above a straight line is characteristic of the digit "9," while a single closed loop is more indicative of the digit "0." As the algorithm is exposed to more examples and receives feedback on its predictions, it refines its model, becoming increasingly accurate at recognizing handwritten digits, even if it never becomes flawless.
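To make this concrete, here is a minimal Python sketch of the same idea, using scikit-learn's small built-in digits dataset and a tiny neural network classifier. The dataset, model size, and settings are illustrative choices for learning purposes, not a recipe for a production system.

```python
# A minimal sketch of the handwritten-digit example above, using
# scikit-learn's small built-in digits dataset (8x8 images of digits 0-9).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # roughly 1,800 labeled digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small neural network: it starts with random weights and adjusts them
# whenever its predictions disagree with the true labels.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("Accuracy on unseen digits:", round(model.score(X_test, y_test), 3))
```

Even this toy setup typically classifies most unseen digits correctly, which is the "learning from examples" idea described above in miniature.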
To help you better understand the dynamic and, at times, confusing world of artificial intelligence, we’ve put together several terms that are essential to the AI conversation.
Havi. "Algorithm in Everyday Life." Havi
Algorithms: In the context of AI, algorithms refer to step-by-step procedures or sets of rules that AI systems follow to perform specific tasks. These tasks can range from simple calculations to complex decision-making processes. Algorithms are fundamental to AI development as they enable machines to process data, learn from patterns, and make decisions autonomously. Understanding algorithms is essential for students and early workers learning about AI, as they form the building blocks of AI systems and are integral to their functionality and performance.
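To show what "step-by-step procedure" means in practice, here is a deliberately simple, hypothetical algorithm written in Python: it finds the largest number in a list by examining each item in turn.

```python
# A tiny example of an algorithm: a fixed, step-by-step procedure.
def find_largest(numbers):
    largest = numbers[0]      # step 1: assume the first number is the largest
    for n in numbers[1:]:     # step 2: look at every remaining number
        if n > largest:       # step 3: if one is bigger, remember it instead
            largest = n
    return largest            # step 4: report the result

print(find_largest([3, 41, 12, 9, 74, 15]))  # prints 74
```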
AlphaxSalt. "Machine Learning Basics." Medium
Machine Learning (ML): ML is a subset of AI that focuses on enabling machines to learn from data without being explicitly programmed. Through algorithms and statistical models, ML algorithms can improve their performance over time as they are exposed to more data.
Mitchell, Bradley. "What Is a Neural Network?" Lifewire
Neural Network: An artificial neural network is a type of computer program that mimics how biological neural networks in the human brain process information. It helps machines learn from data and make decisions or predictions. The network learns by adjusting its weights and biases based on the difference between its predictions and the correct answers. This process involves running many examples through the network and gradually improving its performance. 
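A rough sketch of that learning loop, stripped down to a single "neuron" with one weight, might look like the toy Python example below. The data, learning rate, and target relationship (y = 2x) are invented purely for illustration.

```python
# A toy single "neuron" learning y = 2x from examples, showing how a network
# nudges its weight based on the gap between predictions and correct answers.
weight = 0.0           # start with no knowledge
learning_rate = 0.01
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]   # inputs x and target outputs y

for _ in range(100):                 # run through the examples many times
    for x, target in examples:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x   # adjust the weight to shrink the error

print(round(weight, 3))   # ends up very close to 2.0
```

Real neural networks repeat this same adjust-and-retry idea across millions of weights instead of one.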
GeeksforGeeks. "Introduction to Deep Learning." GeeksforGeeks
Deep Learning: Deep learning is an advanced subset of machine learning (ML) that draws inspiration from the intricate workings of the human brain's neural networks. Unlike traditional ML methods, deep learning entails more complex algorithms that mimic the layered structure of neural networks, enabling it to process and analyze vast amounts of data with exceptional accuracy and efficiency. This sophisticated approach has propelled significant advancements in various fields, such as image recognition, natural language processing, and the development of autonomous vehicles.
"Natural Language Processing Techniques." Revolve AI
Natural Language Processing (NLP): NLP allows computers to understand, interpret, and generate human language. It powers virtual assistants like Siri and chatbots, enabling seamless interaction between humans and machines.
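As a very small taste of what NLP involves at the simplest level, the sketch below breaks a sentence into tokens and counts word frequencies in plain Python. Systems like Siri or ChatGPT are vastly more sophisticated, but tokenizing and counting words is one of the most basic starting points; the sentence here is made up for illustration.

```python
# Splitting text into tokens and counting word frequencies: one of the
# simplest building blocks of natural language processing.
from collections import Counter

sentence = "AI can help students study and AI can help students write"
tokens = sentence.lower().split()   # break the text into individual words
word_counts = Counter(tokens)       # count how often each word appears

print(word_counts.most_common(3))   # e.g. [('ai', 2), ('can', 2), ('help', 2)]
```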
Choudhary, Pranjal. "Everything You Ever Wanted to Know About Computer Vision: Here’s a Look at Why It’s So Awesome." Towards Data Science
Computer Vision: This branch of AI focuses on enabling computers to interpret and understand visual information, such as images and videos, from the real world. Applications range from facial recognition to medical image analysis.
ChatGPT: ChatGPT is an AI-powered tool, among many others, that utilizes Natural Language Processing (NLP) techniques to generate human-like text responses. These systems are trained on large datasets of human conversations and can understand and conversationally respond to text input.
Hallucination: In the context of AI, a hallucination is the phenomenon where AI-generated output contains fabricated, nonsensical, or factually incorrect content presented as if it were true. This can occur due to limitations in the AI's training data or in how the model generates its output, leading to deviations from expected or logical content.
AI Biases:
AI biases can stem from training data reflecting historical inequalities, unrepresentative samples, or skewed information across demographics, locations, and contexts. For instance, an AI system used for predicting stock market trends might show biases if trained predominantly on data from certain industries, leading to inaccurate predictions for other sectors. Additionally, biases can arise from the algorithms themselves, such as through the selection of biased features or performance issues specific to certain scenarios. AI biases can lead to unreliable outcomes, loss of trust in the system, and ethical concerns.
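The toy Python sketch below illustrates the skewed-training-data problem in the simplest possible way: a "model" trained on data dominated by one outcome simply learns to predict that outcome, and its apparent accuracy collapses on a more balanced, real-world mix. All labels and numbers here are invented for illustration.

```python
# A toy illustration of how unrepresentative training data produces bias.
# Imagine a "model" trained mostly on examples from one booming industry,
# where nearly every stock was labeled "up".
from collections import Counter

training_labels = ["up"] * 95 + ["down"] * 5          # skewed training sample
most_common_label = Counter(training_labels).most_common(1)[0][0]

def biased_model(_stock):
    return most_common_label   # always predicts whatever dominated the training data

# On a balanced mix of industries, the model is wrong half the time.
test_labels = ["up"] * 50 + ["down"] * 50
correct = sum(biased_model(s) == s for s in test_labels)
print(f"Accuracy on a balanced test set: {correct / len(test_labels):.0%}")  # 50%
```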
Benefits of AI
The potential benefits of AI across sectors such as healthcare, business, and education are significant. While AI promises to revolutionize these sectors with enhanced efficiency, accuracy, and innovation, its biases and hallucinations can still produce unreliable outcomes and inaccurate outputs. It's therefore imperative to navigate these challenges responsibly to maximize the positive impact of AI on society, and essential for students to stay updated on recent advancements and breakthroughs in the technology.
In 2024, significant progress has been made in areas such as AI ethics, explainability, and interpretability. Researchers and practitioners are increasingly focused on developing AI systems that align with ethical principles. Moreover, breakthroughs in deep learning techniques and computational power continue to expand AI capabilities, leading to exciting applications in fields like autonomous vehicles, natural language understanding, and personalized medicine. By staying informed about these advancements, we can better prepare to contribute to the responsible development and deployment of AI in the future.

HELPFUL RESOURCES YOU SHOULD CHECK OUT:
What is Artificial Intelligence? | Artificial Intelligence In 5 Minutes | AI Explained | Simplilearn
"How AI Works." Khan Academy
Harvard CS50’s Artificial Intelligence with Python – Full University Course
ARTICLES:
"Algorithm." BotPenguin