The Philosophy of AI: How One CLA Student Bridges Technology and Ethics
Vedant Thakur arrived at Purdue University with a plan to study audio engineering technology. During his first semester, he took SCLA 101 with a philosophy professor. He describes the experience as a turning point that demonstrated everything the College of Liberal Arts (CLA) had to offer.
Thakur explored ideas with professors who taught him that “you’re okay to be wrong — this is my view, now push back.” By the end of the semester, he had decided to transfer fully into CLA, where he now pursues a Bachelor of Arts in Artificial Intelligence through Purdue’s philosophy department.
His coursework reflects both his technical interests and his passion for finding meaning. Rather than focusing solely on building AI systems, he asks what intelligence is and how we understand it.
“AI research started with understanding our own intelligence,” he explains. Through studies in philosophy of mind and cognitive science, he examines how human thinking translates — or fails to translate — into machines.
This approach has reshaped how Thakur sees artificial intelligence. “We have to ask what we really mean when we say AI,” he says. “Idea generation? Computation? A system?” By learning to break down arguments and consider theoretical ideas, he has developed the tools to inform how AI is governed and used.
Once a self-described skeptic, Thakur now demonstrates a multi-dimensional understanding of how the field has evolved over the past two years. “There are ways to use it to do better science, better data analysis,” he says. At the same time, he is critical of how AI is often deployed. “Generative AI art is plagiarism,” he argues, pointing to broader systemic concerns.
“Capitalism incentivizes people to prioritize productivity and efficiency,” Thakur says. His issue with AI is not the technology itself, but the ways and reasons people use it. He highlights its incredible potential to advance fields like medicine and scientific research but notes it is often overused for convenience or profit rather than meaningful impact.
This tension has inspired his work in AI policy and governance. Through involvement with responsibility-centered initiatives like GRAIL, he participates in discussions on cybersecurity and public attitudes. These experiences have shown him just how complex the landscape is, requiring coordination between government agencies and industry.
Thakur credits the College of Liberal Arts for transforming how he learns. “The classes focus more on skills instead of just content,” he says.
Courses like SCLA 101 and 102 have helped him develop critical communication skills, and his philosophy and political science classes emphasize analysis and collaboration over memorization. Access to resources like the Rosen Center for Advanced Computing has further expanded his opportunities, allowing him to work on meaningful projects and explore the technical side of AI at the graduate level.
Looking ahead, Thakur sees law school as a powerful avenue that could give him the qualifications and agency to make a difference. He hopes to explore legal technology solutions supporting fairer outcomes, such as using AI to analyze body camera footage and support public defenders.
At the same time, he remains deeply interested in building technology to find “the best, cleanest solution.” Whether through litigation or innovation, Thakur hopes to play a role in shaping how AI is used and ensuring it serves the public good.