# Historical Context

Prior to 1936, there were no computers in the sense that we know them today. A “Computer” was a job title for a person who performed complex calculations by hand. Mechanical and analog machines existed that could solve specific problems, but there were no general-purpose machines that could be programmed or configured to solve any problem. To tackle a complex problem, you had to work it out by hand or build a mechanical device specific to your application.

# Turing Machines

Turing proposed an abstract device that could be “programmed” with an “algorithm” to solve nearly any problem. It was a theoretical model for the modern-day computer, and it laid the foundation for the development of the modern digital computer.

A Turing machine consists of an infinite tape divided into cells, a read/write head, and a finite set of states. At each step, the machine reads the symbol under the head, writes a symbol, moves the head left or right, and changes state according to predefined rules. Such machines can simulate any algorithmic process.
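The mechanics above can be sketched in a few lines of Python. This is a minimal illustrative simulator of my own; the transition-table format, the blank symbol `"_"`, and the bit-flipping example machine are all assumptions, not part of any standard:

```python
# Minimal Turing machine sketch: a sparse tape, a head position, a current
# state, and a rule table mapping (state, symbol) -> (write, move, next_state).

def run_turing_machine(rules, tape, state="start", max_steps=1000):
    tape = dict(enumerate(tape))      # sparse tape: unvisited cells read as blank "_"
    head = 0
    for _ in range(max_steps):
        if state == "halt":           # "halt" is this sketch's accepting state
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write            # write a symbol...
        head += move                  # ...move the head (-1 left, +1 right)
    return "".join(tape[i] for i in sorted(tape))

# Example machine: flip every bit until the first blank, then halt.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}
print(run_turing_machine(flip, "1011"))  # -> 0100_
```

Real Turing machines have an infinite tape; the `max_steps` cap is only there so a non-halting machine doesn't hang the simulator.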

The 1936 Turing paper: *On Computable Numbers, with an Application to the Entscheidungsproblem* — Alan Turing

# Deterministic vs. Non-Deterministic Turing Machines

**Deterministic Turing machines** are like today’s computers: at each step there is exactly one rule to follow, so they take specific steps to get to the next state.

**Non-deterministic Turing machines** can be thought of as having unlimited parallelism: at each step they may branch into several possible moves at once. It’s a fancy way of talking about guessing a solution.

## Why are non-deterministic turing machines interesting?

They could be used to solve really hard problems, like those in NP, efficiently: a non-deterministic machine can “guess” a candidate solution and then verify it in polynomial time.

- Problems in NP are still relatively easy: if only we could guess the right solution, we could then quickly test it.
- They are a way of mathematically formalizing the idea of a brute-force search algorithm.

# Areas of Study

- Halting Problem
- Church-Turing Thesis
- Complexity Theory
- Turing Completeness
- Automata Theory

### Halting Problem

Is there an algorithm that can determine whether an arbitrary program halts or runs forever? Turing proved it’s not possible. This was a pivotal insight into the limits of computation.

Formulated by Alan Turing, the halting problem is a fundamental issue in computer science and mathematics. It asks whether there exists a general algorithm that can determine, for any given program and input, whether that program will eventually halt (terminate) or run indefinitely. Turing’s proof shows that such an algorithm is impossible to create, meaning there’s no way to predict whether an arbitrary program will halt or not. This concept highlights the inherent limits of computation and has profound implications for understanding the boundaries of what computers can and cannot do.
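The shape of Turing’s proof can be sketched as code. The function `halts` below is hypothetical — the whole point of the argument is that it cannot exist — and `paradox` is the self-referential program that defeats it:

```python
# Sketch of Turing's proof by contradiction. `halts` is assumed to exist
# (it does not and cannot); `paradox` is built to contradict it.

def halts(program, argument):
    """Hypothetical decider: True if program(argument) eventually halts."""
    ...  # assumed to exist, for the sake of contradiction

def paradox(program):
    # Do the opposite of whatever `halts` predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:   # halts() said "halts", so loop forever
            pass
    else:
        return        # halts() said "loops", so halt immediately

# Now ask: does paradox(paradox) halt?
# If halts() answers yes, paradox loops forever; if it answers no,
# paradox halts. Either answer is wrong, so no such halts() can exist.
```

This is a diagonalization argument: the decider is fed a program constructed specifically to invert its own verdict.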

### Turing Completeness

Turing completeness refers to the property of a computational system that can simulate a Turing machine. A system is considered Turing complete if it can perform any computation that can be described algorithmically. This concept demonstrates the universality of certain computational systems, showing that **if a system is Turing complete, it can replicate the behavior of any other Turing complete system.** Programming languages, virtual machines, and computational models are often evaluated for Turing completeness to determine their computational power and versatility.
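A striking illustration of how little is needed for Turing completeness is Brainfuck, an esoteric language with only eight commands that directly mirror a Turing machine’s tape and head. The minimal interpreter below is my own sketch (it omits the `,` input command and uses a fixed-size tape for simplicity):

```python
# Minimal Brainfuck interpreter: a tape of cells, a data pointer, and
# eight commands are enough to simulate any Turing machine.

def brainfuck(code, tape_size=1000):
    tape, ptr, pc, out = [0] * tape_size, 0, 0, []
    jumps, stack = {}, []
    for i, c in enumerate(code):          # pre-match [ and ] brackets
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == ">":   ptr += 1                          # move head right
        elif c == "<": ptr -= 1                          # move head left
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256 # increment cell
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256 # decrement cell
        elif c == ".": out.append(chr(tape[ptr]))        # output cell
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc] # jump past loop
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc] # jump back to loop
        pc += 1
    return "".join(out)

# A loop computes 8 * 9 = 72, then prints chr(72).
print(brainfuck("++++++++[>+++++++++<-]>."))  # -> H
```

Because this language can simulate a Turing machine, anything that can host an interpreter like this one — a programming language, a spreadsheet, even some card games — is itself Turing complete.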