The Use of AI and Technology

Short Answer

“I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.”

  • Roy Batty to Rick Deckard, Blade Runner (Scott, 1982)

AGI (artificial general intelligence) is possible, and AI may eventually replicate human intelligence and behaviors. It may take a while to get there, though, and it may not be feasible with current technology.

If you think you need AI (or any new technology) implemented in a business, organization, or similar group, there’s a high chance you actually need better processes (and/or people!) instead.

The problems you encounter rarely change, but the means (and technology) to address those problems change often.

Opinion: The public proliferation of AI in just about every field and occupation is the equivalent of letting a person with no training pilot a Fairchild Republic A-10 Thunderbolt II with a General Electric (or General Dynamics) GAU-8/A Avenger strapped onto it.

  • In case someone from those companies is reading this: no, this isn’t an attack on you.

It’s overkill and not appropriate for many situations, it costs a lot to use and deploy, and it is extremely easy to cause unintentional and collateral damage with it if handled incorrectly, but it can neutralize problems. There are valuable uses for AI, but there are just as many uses that aren’t practical, are actively detrimental, or make little sense.

  • Historically, automation is usually adopted because it is far more effective and efficient than existing methods of solving a problem. If an invention isn’t both effective and efficient, it may not persist for long, or it may cause lasting damage to its users over time.
  • The biggest “hidden cost” is ambiguity and accidental complexity.
  • Other examples of overkill: giving frying pans and coffee machines firmware updates and AI.

As for technology, we have enough tools to do things we never could’ve done before, but we also lose the nuance of how things work “under the hood.” There are also many ways to abuse, exploit, and misuse technology, despite whatever guardrails you may implement.

  • E.g., chances are you probably don’t know what over half the settings on a smartphone do, even if you’re someone with one (almost) always on your person.

Long Answer

A Technology Overview in Education

Technology is permeating classrooms across primary, secondary, tertiary, post-graduate, and adult education. This has accelerated greatly since the COVID-19 pandemic in 2020. Schools may move toward near-full or full technological equivalents for resources like accessing textbooks and generating work for assignments.

  • Whether or not this becomes a system administrator’s nightmare in the future remains to be seen.
  • Though technology is intended to make education more accessible, it can also make the education process longer and more cumbersome.

To give an example: a high school now provides every student with their own personal iPad (electronic tablet). It may have the following features:

  • Specifications to run school required software at acceptable parameters
  • An internet browser
  • Camera software
  • Monitoring software/setup in the backend so a teacher can see what’s on a student’s screen during their specific class period
    • This software typically just projects what’s actively showing on one screen to a dashboard/another screen
    • In this example, the device is also the school’s property, not the student’s, so the school may do this.
  • Proprietary security system
  • Email
  • Assignment and work software
  • Activity software
  • Storage and organization software
  • Classroom management software
  • “All-in-one” software to cover multiple use cases
  • Access to online textbooks
  • An insurance/warranty plan in case of damages

A student iPad can cover almost all use cases in a school setting and simplify the logistics of resource management. It’s also a trade-off: though you can now manage fewer physical resources, providing technology may require additional overhead like licensing, device management systems, connectivity and internet providers, and equity concerns.

Not all technology is great or necessary, and preferences exist. People may vastly prefer a physical medium for doing work over an electronic one, and providing a physical medium may itself be an accommodation. There are also cases where certain activities cannot (currently) be substituted with technology, so they still require physical resources.

[Cyber]Security in Technology

Technology, like computers and cell phones, can also be misused by students and staff, including jailbreaking, installing unauthorized software, and damaging hardware. The human element is almost always the easiest way to bypass a technology’s security system and cripple it. To illustrate my point:

  • Example 1: A post-it note with your password on it next to a workplace computer.
  • Example 2: Sharing your password with someone else so they can use your device or a school provided device in general.
  • Example 3: Keeping the default username/password, which is easy to look up, on an administrator account.
  • Example 4: Making “12345” your combination key.
  • Example 5: You might also be working at the Louvre and have the password set to “Louvre” for your surveillance system (Leath & Geho, 2025).
    • While that’s easy to remember, that’s just as easy to guess and cause havoc with.

There’s also more points to consider with technology:

  • User privileges (e.g., for students and staff) may be deliberately limited, compared to administrator privileges (e.g., for IT teams), to mitigate permanent damage to devices through software.
  • Unauthorized software may violate licenses and school policies, introducing more legal trouble.
  • Damages may lead to financial responsibility and paying back losses.
  • Malware and insecure software on one system can easily and quickly spread to every other connected system in a network.

Though technological misuse can be mitigated through proper training, humans still remain one of the easiest, if not the easiest, ways to cause damage to technology in any setting. This applies to schools, homes, businesses, and many more places.

As an example of the human element above: you may have security systems up to date and ready to defend against intrusions, but an employee’s computer gets breached, and now there’s an open door into every other system, ripe for the taking.

To distill technological security (cybersecurity) down into two points:

  1. Technology, and its security, is far better protected with modern protocols and tools than in the past. Said protocols are continuously improving over time.
  2. At the same time, the amount of damage you can do with technology makes people extremely vulnerable compared to what was possible in the past.

Lastly, if you ever design something and have even the slightest concern about security risks, design it as if an attacker were already inside and anyone accessing it could compromise it.

  • i.e. Worry about how to stop further damage.
  • Don’t give everyone administrator access.

From Medieval Security to Modern Security

Imagine you have 100 servers to store your sensitive data, like personal records, transaction information, bank account information, and more. Now pretend those servers are located inside of a fortress.

It’s easier to defend one gate into all the servers versus 100 gates to access the servers. By reducing the attack surface and funneling attackers into one control point, you increase the overall defense of the fortress.

This concept applies to both medieval times and modern times. In modern times, you may see it as authentication, monitoring, and authorization checks before entry is permitted. A physical school building may have one secure entry point where visitors enter, check in, and receive permission before safely proceeding. Since you only have one entry point, you want to do everything to ensure proper safety, but also do it quickly to reduce queues/waiting time.
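
As a loose sketch of the “one gate” idea in code (hypothetical names throughout; a real deployment would use a hardened gateway, an identity provider, and an audit pipeline, not a dictionary):

```python
# Minimal "one gate" sketch: every request to any of the 100 servers
# must pass through a single checkpoint that authenticates, authorizes,
# and logs before routing traffic onward.
AUTHORIZED = {"alice": {"records", "transactions"}}  # hypothetical ACL

def authenticate(user: str, token: str) -> bool:
    return token == "demo-token"  # stand-in for a real identity check

def log_entry(user: str, resource: str) -> None:
    print(f"AUDIT: {user} -> {resource}")  # monitoring at the gate

def route_to_server(resource: str) -> str:
    return f"connected to {resource} server"  # the only path inside

def gate(user: str, token: str, resource: str) -> str:
    if not authenticate(user, token):                # 1. who are you?
        raise PermissionError("authentication failed")
    if resource not in AUTHORIZED.get(user, set()):  # 2. may you enter?
        raise PermissionError("not authorized for this resource")
    log_entry(user, resource)                        # 3. record the visit
    return route_to_server(resource)

print(gate("alice", "demo-token", "records"))
```

One checkpoint means one place to harden, monitor, and audit instead of a hundred.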

While it’s possible to get in through unorthodox means, it’s far harder when you limit the number of entries past your “walls.” Though you might need to build more gates as you scale up in size, you stay strict about limiting the number of possible entries. What was once a strategy for survival is now a risk mitigation strategy.

AI in General

Getting AI to do something isn’t the same as doing it yourself. It’s like a client describing what they want to your business. The client who wants the design made (the person utilizing the AI, in this case) isn’t the designer; the business (you) fulfilling the request is. Following that logic, a client cannot, in good faith, claim to be the designer. Intent is not authorship.

Most AI use in education amounts to something else doing the work for the student, and it should not be confused with getting the work done yourself. If a student knows the content, they should not need AI to explain, defend, or interpret it. Students doing the appropriate practice and work on their own to achieve mastery is part of the learning process. Without it, critical thinking and other important skills suffer.

  • For example: the text you’re reading now. AI could generate an answer for everything here, but I know this material because I did the human element: putting in the work and writing the thing in the first place. The act of going through the work reinforced my knowledge, which is crucial for learning.
  • AI used in this way also fosters a dangerous, false sense of competency, like using RegEx (regular expressions) you don’t understand.
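
To make the RegEx point concrete, here’s a hedged illustration (a hypothetical pattern, not pulled from any real codebase): a copied pattern can look authoritative while quietly rejecting valid input.

```python
import re

# A plausible-looking "email" pattern someone might copy without understanding it.
pattern = re.compile(r"^[A-Za-z0-9._]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

print(bool(pattern.match("jane.doe@example.com")))     # True
print(bool(pattern.match("jane+school@example.com")))  # False: the pattern
# silently rejects "+" tags, which are perfectly valid in email addresses.
```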

If you combine a substitute for mastery with a society diminishing the value of intelligence, honesty, integrity, and personal responsibility, humanity’s educational level and ability to accomplish tasks is likely, if not certain, to diminish.

Here are some questions for you:

  • “When should you use AI?”
  • “When should you NOT use AI?”
  • “How do you know what AI generates is true and correct?”
  • “When can you use a simpler method instead of an advanced AI tool to solve your problem?”
  • “At what point will AI fail to solve your problem(s)?”
  • “Will using AI benefit your instruction?”

If you cannot confidently answer these questions, that’s OK, but I will tell you that you shouldn’t use AI yet. If you do have solid answers, that’s good.

For those with a math or science background who are unsure where to start or new to AI, I’ll point you to Google’s Machine Learning Crash Course on AI/LLMs (Google Developers, 2025).

  • If you’re curious about how AI Detectors may work, focus on topics under the “Classification” section to help figure that out.

For non-technical readers, here’s AI in plainer terms:

  • Artificial Intelligence (AI) takes data and information, tries to find patterns and relationships within it based on prior knowledge (i.e., what it’s trained on), and generates results from its findings.
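
Here’s that description as a toy sketch (hypothetical, trivially small “training data”; real models are vastly more sophisticated, but the find-patterns-then-generate shape is the same):

```python
# A toy "AI" in the loosest sense: store labeled examples (prior knowledge),
# then guess the label of new input from the most similar stored example.
def similarity(a: str, b: str) -> int:
    # Count shared words: a crude stand-in for "finding patterns."
    return len(set(a.lower().split()) & set(b.lower().split()))

training_data = [  # hypothetical prior knowledge
    ("the quiz is due friday", "school"),
    ("submit your essay tonight", "school"),
    ("scramble two eggs in butter", "cooking"),
    ("whisk the eggs with milk", "cooking"),
]

def guess(text: str) -> str:
    # Generate a result from the closest pattern seen before.
    return max(training_data, key=lambda ex: similarity(text, ex[0]))[1]

print(guess("when is the essay due"))  # school
print(guess("fry the eggs gently"))    # cooking
```

Everything the toy “knows” comes from its stored examples; the same is true, at enormous scale, of a trained model.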

Before continuing, I’ll emphasize important points about AI that mirror my views on it (Cybersecurity and Infrastructure Security Agency [CISA], 2024).

  • No matter how AI is used, you’re responsible for the outcome. Always verify outputs.
  • Don’t feed it sensitive data, like medical records, legal documents, credit card statements, and/or confidential information in general. It can use, log, and transmit that data into a public space, and put you at risk of fraud, theft, and more.
  • Even with its advantages, AI can remove learning opportunities and retention of information.
  • Your students WILL use AI if able to. Make plans for its use.
  • You don’t always have to incorporate AI into a classroom. It’s a tool, like any other.
  • AI can hallucinate wrong answers with confidence.

That said, AI is a complicated topic. The velocity, volume, value, variety, and veracity of AI development increase year after year, month after month, and much of what I say could be invalidated within five years, if not within one.

According to Simon Willison in June 2025, between December 2024 and June 2025 alone there were multiple advancements in AI, including but not limited to (Willison, 2025):

  • Amazon’s Nova models
  • DeepSeek V3
  • Meta’s Llama 3 Series
  • Mistral Small 3
  • Many, many more

The Hidden Complexity of AI

This is where problems are far more noticeable, but not necessarily where problems start. It’s also not just a case of “garbage in, garbage out” either.

Let’s pretend I’m a human acting as an AI for an example. Say you ask me, a human, to cook eggs for breakfast. I interpret the task as-is and try to complete it. When you receive the eggs, though, they aren’t what you were hoping they’d be.

What went wrong? The answer: accidental complexity.

Many things could be wrong. I could be an amateur who has never cooked eggs in my life, or I could be a well-renowned chef who has cooked eggs tens of thousands, if not hundreds of thousands, of times.

When someone delivers a task to an AI, like Gemini or ChatGPT, they often send the request assuming the AI knows their context and underlying intent. Much like when asking the same of humans, that isn’t always the case. There’s a high chance, if not a certainty, that deficiencies in knowledge are where processes and details get confidently made up to try to reach the desired goal.

Let’s go back to when you asked me to make eggs. You may not have specified, or even considered, several key requirements essential to this process, such as:

  • How you want the eggs prepared
  • What type of tools should I cook with
  • Is the cooking equipment available for use
  • What ingredients should I omit/use
  • Is all the data organized (cleaned) properly or do I need to work off messy data
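
In code form (a hedged toy with hypothetical parameter names), every unstated requirement becomes a default the “chef” silently picks for you:

```python
# Every argument you don't specify is a guess I make on your behalf.
def cook_eggs(style="scrambled",        # you never said how you wanted them
              pan="whatever is clean",  # or which tools to use
              butter=True,              # or which ingredients to omit
              doneness="medium"):       # or how done they should be
    return f"{doneness} {style} eggs in {pan}, butter={butter}"

# You asked for "eggs." I returned *an* answer, just not necessarily yours.
print(cook_eggs())
# Explicit requirements remove the guesswork:
print(cook_eggs(style="over easy", pan="a nonstick skillet", butter=False))
```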

The lack of a one-size-fits-all, easy solution is intentional for many tasks. One example is where Fred Brooks distinguishes essential complexity from accidental complexity and states “the complexity of the design itself is essential” (1986). Think of accidental complexity as issues you could eliminate, whereas essential complexity covers issues you can only mitigate. Accidental complexity can take many forms, such as:

  • Lack of data volume
  • Messy or disorganized data/knowledge infrastructure
  • Poorly defined metrics of success, or what counts as success
  • The level of ongoing maintenance and cost required
  • Processes not mapped out, or mapped out ineffectively

To go back to eggs: essential complexity, in this case, is the system and interactions of finding, preparing, and serving the eggs for consumption. Accidental complexity is the countless array of other steps one could take to reach that same conclusion, such as substitutions, alternative cooking methods, and more. That complexity builds significantly faster when you automate a “generalist” to do everything rather than a “specialist” to do only a limited set of items. In all cases, automation complexity increases as the systems behind the tasks grow.

The very nature of cooking has complexity built into it no matter which steps you take. If you’re careless, the entity fulfilling the request will work based on what it knows, guess what it doesn’t know, and try to meet requirements based on what you describe, whether the result is good or bad. It doesn’t stop at cooking either; this spans myriad fields and situations.

The Akinator Files

When I was much younger, there was a web browser game called Akinator.

Is it AI? Nope, it was basically a binary search tree.

  • I’m aware that’s a massive oversimplification; don’t worry.

Did it look like magic to me? At the time, yes.

The concept was pretty simple. You answer a series of questions to supply details about the character, animal, or person you’re thinking of. Your only responses were:

  • Yes
  • No
  • Don’t Know
  • Probably
  • Probably Not

While there are multiple choices, the only possible values are 0 or 1. “Don’t know,” “probably,” and “probably not” only affect the probability (specifically, the likelihood/confidence) of a 1 or 0, whereas “yes” and “no” directly confirm a 0 or 1. That means, for each question, there are only two true states, and your answer pushes the state closer to either 0 or 1.

In more technical terms, for n questions it asks, it tries to find the answer from 2^n possibilities. For 10 questions, that is 1,024 possibilities. For 20 questions, that becomes 1,048,576 possibilities. 30 questions? 1,073,741,824 possibilities. 33 questions means 8,589,934,592 possibilities, or successfully finding one person out of the ~8.6 billion people on Earth.

The more questions you answer, the more likely it is to narrow down the correct answer. It’s really easy for a computer to track all of this, but much harder for a human.

To look at it in reverse: say you have a really large list of options, and each option has many values assigned to it. If your answer to the first question is “no,” it can assume all options with a value of “yes” for that question are incorrect. It then moves the options that don’t match from its available selection pool to another pool, and it doesn’t need to look through those options again for further questions.

  • In case the player provides a dummy/wrong answer, intentionally or not, it can also “reference” the pool it moved previously eliminated options to and backtrack a few nodes/steps.
  • More technically speaking, the search runs in O(log n) time, as the sketch below shows.
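
A hedged toy of that elimination idea (hypothetical characters and questions; the real Akinator is far more probabilistic than this):

```python
# Each candidate stores an answer (1 = yes, 0 = no) per question.
candidates = {  # hypothetical toy database
    "Sherlock Holmes": {"fictional": 1, "wears_cape": 0},
    "Batman":          {"fictional": 1, "wears_cape": 1},
    "Marie Curie":     {"fictional": 0, "wears_cape": 0},
}

def ask(pool, question, answer, eliminated):
    """Split the pool: keep the matches, shelve the rest for backtracking."""
    kept = {name: traits for name, traits in pool.items()
            if traits[question] == answer}
    eliminated.update({n: t for n, t in pool.items() if n not in kept})
    return kept

eliminated = {}
pool = ask(candidates, "fictional", 1, eliminated)  # drops Marie Curie
pool = ask(pool, "wears_cape", 0, eliminated)       # drops Batman
print(list(pool))        # ['Sherlock Holmes']
print(list(eliminated))  # shelved, not deleted: available for backtracking
```

With a reasonably balanced split per question, each answer roughly halves the remaining pool, which is where the O(log n) behavior comes from.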

If it doesn’t guess right at the end? That’s perfectly fine; it asks what your actual answer was and adds it to the database, using the yes/no values you supplied as the values assigned to your option. If multiple people think of the same thing later on, it can calibrate those results and fine-tune the values for each option.

Combine that concept with 10+ years of many people using Akinator and many characters added into the database. All that information makes a massive list to reference. As more people supply data to it, the statistical model for guessing improves thanks to more, and better, available data.

AI in Instruction

Where AI applies to teaching and students specifically is not likely to change easily. Here are some examples I’ve seen, categorized below. Additionally, a lot of what AI can do, plain automation, like a script or macro, can probably do instead at far cheaper cost and with a lower error rate.

Examples of “Allowed” AI Use

  • Repeating redundant tasks usually done manually, like grading
  • Streamlining administrative and clerical work
  • Generating concepts, examples, and ideas to create a solution from on human terms
  • Comparing submissions to answer keys for quick grading
  • Helping students with disabilities who are explicitly allowed to use it to learn
  • Speech-to-text

Examples of “Prohibited” AI Use

  • Completing assignments for students
  • Feeding AI confidential or classified information
  • As a replacement for learning new topics
  • Using it as a substitute for critical thought and problem-solving
  • Cheating
  • Making AI videos of teachers/students (legal issues on this one)

As a stern reminder, anything listed under “Allowed” AI use still requires a human to verify its accuracy and ensure the outcome is what you intended.

How Effective Is AI, Really?

In my opinion, AI’s current implementation is not terribly effective (yet), but it has a knack for exceeding expectations in weird ways. It can do things fast, but it makes mistakes just as fast. That’s why you still need a human verifying AI outputs. You can, though, draw some parallels between how AI learns and how a human may learn.

Whether or not AI itself is inherently good or bad is a different argument concerning ethics. This applies to anything, depending on how it’s used and affects people, like food, medicine, guns, and software. While you shouldn’t ignore ethical implications, it’s extremely difficult to reason through and sufficiently cover here.

What it can do really well is pattern-based work. Even then, it’s still unintelligent and requires a significant amount of time and training data to reach competency. This encompasses many types of specifically trained, repetitive tasks like speech-to-text, image generation, moderation, and embedding.

  • E.g. It may know tomato is a fruit, but may not know not to put it in fruit salad.

AI cannot understand like a human can. Overdependence on AI by humans breeds normalized incompetence, which will be a rapidly growing problem alongside ever-growing costs as data complexity increases. It can generate answers like a human can, which means it can generate incorrect answers and slip up like a human can too.

  • AI models often give an answer, even under uncertainty, rather than say they don’t know. They don’t know that they don’t know.
  • Cleaning up messes is generally harder than preventing messes in the first place.

Another problem I see with AI is when people want to automate large projects or work at large scale. It’s a resource sink in every way possible, whether time, cost, staff, or management. The more things you try to automate, the more the resources required to create and maintain that automation increase, and they increase exponentially. If you really need to automate something, I’d say three things before you attempt it:

  1. Start small. Extremely small. Start with one tiny problem.
  2. Assume whatever solution you make will cause problems and you need to maintain it for a long time.
  3. Your operating costs may drastically increase with AI vs no AI.

I would say AI is better for people with prior domain expertise, as they can differentiate right from wrong and boost their productivity with it. AI, however, is detrimental to those with less background knowledge, as it’s more likely to create technical debt, and these people may not be able to tell, accurately or precisely, whether outputs are good or bad.

  • This aligns with views I’ve seen from other professionals in the technology sector, such as Denis Stetskov in his post from September 25, 2025 (Stetskov, 2025).
  • Technical debt can also be avoided by not over-engineering something that doesn’t matter in the next six months or doesn’t need to service 1+ million, or 1+ billion, users in the foreseeable future.

Despite that viewpoint, I wouldn’t be surprised if someone told me people were addicted to AI, what it provides, and its capabilities. It’s a technological marvel enabling the average user to interact with data and models in plain languages like English rather than coding languages like Python and R. It can solve in seconds problems that used to take some people days or longer. It can help create life-saving medicines and discover methods humans may not normally achieve in their lifetimes.

All of this is to say AI is extremely powerful for someone who could’ve never done these things before without it, so it’s no surprise to me sudden and easy access to it may cause attachment.

It isn’t a stretch to believe AI has acquired sensitive information and private security documents through user accounts, which should put any cybersecurity professional on high alert. Many people may unintentionally, or intentionally, insert private/legally protected information into an AI model, which means it can then utilize that data. AI can be, and has been, utilized by humans as a means to manipulate, deceive, and attack other humans and resources, such as the AI-powered PromptLock ransomware (Cherepanov, 2025). I should further emphasize that publicly/commercially available models are capable of these feats, showing accessibility is an additional concern.

  • A seemingly innocent case may be using AI to parse content and generate summaries of a website on a search engine so you don’t have to visit the website to learn about something (Law & Guan, 2026).

There’s also only so much data available to train an AI model on. A lot of available data is built upon centuries and millennia of prior information generated by people and translated into machine-readable formats. It’s entirely possible to “run out” of data to feed an AI to let it solve problems, which may inhibit its progress and slow improvements to its functions.

Views toward AI may also be distorted by administrators, managers, and directors thinking they can use AI to replace junior staff. That, however, means eliminating the people you could train up to be seniors with domain expertise. If a company, nation, etc. invests so much into AI that it replaces human labor, and by extension trades paid human labor for “free” AI labor, then how do humans afford goods and services or contribute to the economy?

AI can also use survivorship bias to its advantage through selection pressure: a force that makes a particular trait/attribute more likely to survive in certain conditions. For example, say you publicly transmit AI outputs to a wider audience, and that audience can report whether each piece was made by AI or a human. Most AI content gets correctly flagged this way, but some AI-written content still makes it through. People reporting content may also state why it appears AI-written, which is more data to utilize. You can train a model on which content was flagged vs. which was not, and even why it was flagged, to increase the chances of creating unflagged content. The implications of this are scary; as more AI content passes as human-made, it becomes harder to detect what is developed by AI and what is developed by humans.
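
A hedged sketch of that feedback loop (hypothetical flag data; a real pipeline would retrain an actual model on these signals):

```python
# Hypothetical audience reports: (content_id, flagged_as_ai, stated_reason)
reports = [
    ("a1", True,  "repetitive phrasing"),
    ("a2", True,  "no concrete details"),
    ("a3", False, None),  # AI-written, but it slipped through
]

# Selection pressure: only the survivors (unflagged outputs) are kept as
# positive examples for the next generation; flag reasons mark what to avoid.
survivors = [cid for cid, flagged, _ in reports if not flagged]
avoid = [reason for _, flagged, reason in reports if flagged]

print(survivors)  # ['a3']: its traits are what the next model imitates
print(avoid)      # ['repetitive phrasing', 'no concrete details']
```

Each round, the unflagged (“surviving”) content defines what the next generation imitates, so outputs drift toward whatever evades detection.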

Lastly, AI is an umbrella for many types of automation, and large language models (LLMs) are one item under that umbrella. While general AI may not be performing all that well, highly specialized machine learning tools dedicated to fields like astronomy, law, production, translation, and even medicine perform a select set of tasks and pattern recognition exceedingly well. Machine learning, while often lumped together with LLMs and general AI (AGI), is extremely useful with real potential for improvement, but I’d caution against exaggerating its capabilities.

Cost of AI Implementation?

In general, cost and latency (time taken) scale with scope. The larger the scope (or more general), the more it costs.

The most straightforward cost is the sheer quantity of data required to train AI (i.e., to be “artificial general intelligence”). It’s millions, if not billions or even trillions, of data files, documents, text, images, videos, and more fed into a single model for training. The necessary amount increases with the more tasks you want AI to do and decreases with fewer tasks. Additionally, the more data there is to process, the more time it takes to process. This can mean months, if not years, of time required to train one model to a sufficient standard.

AI and related systems, like LLMs, have many of the following traits:

  • AI frequency/usage can multiply costs the way complexity does, accruing with each user request on top of any initial setup costs.
  • Expanding depth or breadth alone is costly, whether the problems are simple or complex.
  • Investing in both depth and breadth, i.e., the ability to handle multiple areas well, is exponentially costly.
  • Data complexity for training AI increases exponentially with more depth or breadth.
  • These costs exist in both the free and paid AI products consumers access.
  • Despite any potential benefits, AI, LLMs, and automation in general can quickly force their designers/hosts into negative ROI (return on investment) and into operating at a loss.
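
As a back-of-the-envelope illustration of how per-request costs compound (all numbers are hypothetical, not any provider’s real pricing):

```python
# Hypothetical unit economics for a hosted AI feature.
cost_per_request = 0.012        # inference cost the host pays per call
revenue_per_user_month = 10.00  # flat subscription price
requests_per_user_month = 1_500
users = 20_000

monthly_cost = users * requests_per_user_month * cost_per_request
monthly_revenue = users * revenue_per_user_month
print(f"monthly cost:    ${monthly_cost:,.2f}")    # $360,000.00
print(f"monthly revenue: ${monthly_revenue:,.2f}") # $200,000.00
# Usage multiplies cost with every request; heavy users can push ROI
# negative even before any initial setup costs are recovered.
```

Under these made-up numbers, the product loses money every month; that’s the negative-ROI trap the list above describes.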

Though automation isn’t AI, it still suffers from similar issues.

As for who pays, or has ownership of, the cost of AI/automation, that’s probably one of three situations:

  1. The user/employee directly incurs a monetary cost.
  2. A technical and/or financial team/department, like IT or FinOps, manages the costs.
  3. The head(s) of the organization manages the costs.

There could be other situations, but those three seem the most likely. Nevertheless, because there is a cost, it needs to be paid, or else you accrue debt and suffer undue/unexpected expenses. If a resource remains unowned/unused, it’s something to try to delete so it no longer drains money. If you need an effective, though crude, method to find who needs a resource, disable or remove it and wait until you find the person who complains the loudest.

I’ll emphasize that this part only talks about cost and complexity in automation. It doesn’t cover whether risk increases with complexity, the break-even points of automating tasks vs. doing them manually, the Pareto efficiency of adding more automation features, and many other considerations. I believe highlighting the underlying costs is important, as they may be overshadowed by all the benefits AI could provide.

You could also design your own AI instead of using an AI service another provider offers. Even if both you and the provider have the exact same models, parameters, and code, you may avoid high per-use costs, but you may lack the sheer processing power and quality-of-life features a provider can afford through the greater resources and technology backing their models.

  • People may not even care that you can make a copy of a service for cheaper, because it isn’t the brand’s product. Name-brand recognition is very powerful.

Potential Solution(s) to AI in Class Settings

As a reminder, AI can do some tasks really well, while on other tasks it flops like a sad pancake on a griddle. If anything, it may make educators go more analog and avoid implementing it.

Still, if an AI program writes your essay or does an assignment for you in general, that’s cheating and academic dishonesty. You’re claiming you did the work despite another entity actually doing it for you.

It can also lead to 10+ students giving the exact same answers, word for word, on essays that are later reviewed and graded by a teacher. It makes the integrity of homework and other assignments done outside of class much harder to verify, though the quality is still questionable.

To counter this, a teacher may resort to in-person, handwritten tests and other materials at the school, in a monitored area, without the use of any assistive technology (i.e., paper and pencil). Accommodations may alleviate some of these restrictions, but likely not all of them, to preserve educational integrity. If it’s handwritten, there are also fewer technological barriers and fewer excuses for why something isn’t done.

It also means adapting how the classroom functions to get more work done in class instead of outside class. That may put an additional strain on teachers, but it does mitigate the issue of letting a student use AI to do the work for them.

I suppose, if you want a different twist: you could create an assignment requiring AI usage and treat completing it without AI as cheating.

  • You could probably defend it by saying “the assignment asked for a specific software to complete it, but the student didn’t use that software so they failed.”

Bibliography

  1. Brooks, F. P. (1986). “No Silver Bullet—Essence and Accident in Software Engineering” (PDF). Proceedings of the IFIP Tenth World Computing Conference: 1069–1076.

  2. Cherepanov, A. (2025, August 26). First known AI-powered ransomware uncovered by ESET Research. WeLiveSecurity. https://www.welivesecurity.com/en/ransomware/first-known-ai-powered-ransomware-uncovered-eset-research/

  3. Cybersecurity and Infrastructure Security Agency. (2024, September). Secure our world: Using AI – Tip sheet. https://www.cisa.gov/sites/default/files/2024-09/Secure-Our-World-Using-AI-Tip-Sheet.pdf

  4. Flynn, A. N., Takahashi, T., Sim, A., & Brunstrom, J. M. (2025). Dish swap across a weekly menu can deliver health and sustainability gains. Nature Food, 6, 843–847. https://doi.org/10.1038/s43016-025-01218-8

  5. Google Developers. (2025). Machine learning crash course. https://developers.google.com/machine-learning/crash-course

  6. Law, R., & Guan, X. (2026, February 4). Update: AI Overviews reduce clicks by 58%. SEO Blog by Ahrefs. https://ahrefs.com/blog/ai-overviews-reduce-clicks-update/

  7. Leath, M., & Geho, L. (2025, November 6). Password to Louvre’s video surveillance system was “Louvre”, according to employee. ABC News. https://abcnews.go.com/International/password-louvres-video-surveillance-system-louvre-employee/story?id=127236297

  8. Ptacek, T. (2025, June 2). My AI skeptic friends are all nuts. Fly.io Blog. https://fly.io/blog/youre-all-nuts/

  9. Scott, R. (1982). Blade Runner [Film]. Warner Bros. Screenplay by Hampton Fancher and David Peoples.

  10. Stetskov, D. (2025, September 25). AI won’t save us from the talent crisis we created. From the Trenches (Substack). https://techtrenches.substack.com/p/ai-wont-save-us-from-the-talent-crisis

  11. Wald, A. (1943). A method of estimating plane vulnerability based on damage of survivors. Statistical Research Group, Columbia University. CRC 432, reprint from July 1980. Center for Naval Analyses.

  12. Willison, S. (2025, June 6). The last six months in LLMs, illustrated by pelicans on bicycles. https://simonwillison.net/2025/Jun/6/six-months-in-llms/
