
AI Literacy Framework

An attempt to create a framework for how we, at HIU, might approach teaching AI to our students so that they are prepared for the workplace upon graduation.

Comments

AI has been disruptive to higher education. It has been abused by students and has been a bane for faculty. This guide is intended to be a framework for AI literacy. I am trying to create a place where we collect all that ought to be taught to our students about the nature of AI and how to use it. Our students are going to enter the workforce soon, and they need to be prepared. Various reports state that the majority of jobs will be disrupted by AI or will require that employees be proficient in using it. We need to educate them, or else we are doing them a disservice.

Fall 2024 - Agency and "Reason"

Over the summer of 2024 and into the fall, many of the frontier AI companies began announcing various aspects of agency. Previously, these AIs were trained, and assumed, to write fluently rather than factually. Now they are being sold as products that operate beyond a "chatbot"; they are "programmable". The companies call it different things, but "co-pilots", "agents", "artifacts", and "custom GPTs" are all programmable products. At first these were just fancier uses of the AI within the chatbot structure. They were then loosed upon the internet, able to search for answers and gather information. Anthropic's "Computer Use" feature lets Claude operate your computer if given the right prompts; it can open files and delete them.
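To make the "agent" idea concrete, here is a minimal sketch of the loop these products run: the model is asked what to do next, its reply is mapped to a tool, the tool's result is fed back, and the loop repeats until the model says it is done. Every name here (ask_model, web_search, the "FINAL:" convention) is a made-up placeholder for illustration, not any vendor's actual API.

```python
def ask_model(conversation: list[str]) -> str:
    """Stand-in for a real chat-model call; scripted so this sketch runs end to end."""
    if not any(m.startswith("Tool result:") for m in conversation):
        return "search: HIU library hours"
    return "FINAL: The hours are listed in the search results above."


def web_search(query: str) -> str:
    """Placeholder tool: pretend to search the web and return results as text."""
    return f"(search results for: {query})"


TOOLS = {"search": web_search}


def run_agent(task: str, max_steps: int = 5) -> str:
    # The agent loop: ask the model, run the tool it requests, feed the result back.
    conversation = [
        "You may use the tool 'search: <query>'. Say 'FINAL: <answer>' when done.",
        f"Task: {task}",
    ]
    for _ in range(max_steps):
        reply = ask_model(conversation)
        conversation.append(reply)
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        if reply.startswith("search:"):
            query = reply[len("search:"):].strip()
            conversation.append(f"Tool result: {TOOLS['search'](query)}")
    return "No answer within the step limit."


if __name__ == "__main__":
    print(run_agent("When is the library open?"))
```

The point of the sketch is that the "agency" is just this loop: the model writes text, and ordinary software decides whether that text triggers a search, a file operation, or something else.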

"Reasoning" is something that OpenAI has been throwing around, pretending that its AI, ChatGPT4 (strawberry), is capable of logic and reasoning. True reasoning requires logic and the current batch of LLMs are not capable; they generate answers based on probability, not logic. Instead of logic or reasoning they have encouraged their AI to break a prompt down into parts, and then, using chain of thought, answer the prompt piece by piece or step by step. By doing that, the answers are more correct, but are merely the illusion of reason.

Update, Feb 2025 - DeepSeek, a Chinese AI company, now does reasoning better than ChatGPT. Instead of programming in logic, it added a step in training that makes the model better at reasoning: the AI ingests roughly 800,000 examples in which prompts are broken down into logical steps, training it to recognize what working through a problem looks like. It works pretty well. And because this happens during training rather than at prompt time, the model doesn't have to answer a prompt a dozen times, matching an answer to each piece it cut up. It uses less electricity and less time. The great thing is that this training method was released to everyone. By mid-summer most frontier AIs will have reasonable "reasoning", but still no real logic.
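To illustrate what "prompts broken down into logical steps" can mean as training data, here is a sketch of one such record. The field names and format are assumptions made for illustration, not DeepSeek's actual data schema.

```python
# An illustrative training record: a prompt paired with a worked-out chain of
# steps and the final answer. Fine-tuning on many records like this teaches the
# model to *produce* text shaped like step-by-step reasoning; it does not give
# the model a logic engine.
import json

training_example = {
    "prompt": "If a train travels 60 miles in 1.5 hours, what is its average speed?",
    "reasoning_steps": [
        "Average speed is distance divided by time.",
        "Distance is 60 miles; time is 1.5 hours.",
        "60 / 1.5 = 40.",
    ],
    "answer": "40 miles per hour",
}

print(json.dumps(training_example, indent=2))
```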

August 2025 - Yup, "reasoning" is now part of most of the frontier AIs. The major companies have released updated and newer models capable of "reasoning". No, we are not headed toward AGI (Artificial General Intelligence), and I don't think LLMs are going to get us there. True AGI will require logic as part of its essential structure and base programming.

Other HIU Guides About AI