# Introduction

Cell phones are all around us. We use them every day to communicate nearly instantly with friends, family, and random internet strangers across the globe. The fact that we are able to send messages so quickly is truly a technological marvel and brings together innovations in physics, electrical engineering, information theory, computer science, and more. This blog post hopes to illuminate how these magical devices work.

There is no way that this post will be able to cover every aspect of wireless communications (as you can devote your entire life to studying the subject), but I hope to distill it into manageable components that allow for further study if desired.

Each section will be broken up into two subsections: one accessible to those with a non-technical background and one for those with more physics and math knowledge. The more general sections will be denoted by a 😎, while the more advanced sections will be denoted by a 🤓. They are as follows:

1. A Brief History of Communication
2. The Communication Problem
3. The Layered Model of Communication
4. A Primer On Electromagnetic Waves (Radio Waves & Light)
5. How Information is Encoded in Light (Modulation)
6. Antennas
7. The Network of Cell Towers
8. Putting It All Together

Hopefully, this post will give you a taste of how the theory of information works closely with other fields of science and engineering to produce massively effective and efficient systems.

# A Brief History of Communication

Just to give a sense of where we are and where we have come from, here is a by-no-means-complete timeline of the history of communication:

• ~1.75 Million Years Ago: First humans use spoken language to yell at kids for playing with rocks.
• ~3000 BCE: Ancient Mesopotamians develop Sumerian, the first written language, to write down their laws, stories, and more.
• ~540 BCE: Cyrus of Persia establishes one of the first postal services, where messages traveled along a network of roads.
• ~1200 CE: People begin to use carrier pigeons to transport messages in a one-way fashion.
• ~1450 CE: Johannes Gutenberg invents the printing press, allowing for mass production of books and other writing.
• ~1650 CE: British navy pioneers using flags as signals for different messages at sea.
• May 24, 1844: Samuel Morse sends first telegraph message: “What hath God wrought?”
• March 10, 1876: Alexander Graham Bell makes first telephone call.
• 1886–1888: Heinrich Hertz conducts experiments sending and receiving radio waves.
• Early 1920s: Bell Labs tests car-based telephone systems.
• 1973: Motorola’s Martin Cooper invents the first handheld cellular phone.
• 2007: First iPhone
• 2019: 5G cell phone technology

# The Communication Problem

The basic idea of communication is to move information from point A to point B as accurately and efficiently as possible. Let’s take a closer look at what this actually means.

😎

According to Merriam-Webster, information is defined as “knowledge obtained from investigation, study, or instruction.” This is a nice colloquial definition, but not exactly what we are going for in an information theoretic sense.

Another way to look at it is that information is anything that can be described. An encoding is, therefore, the description you choose that uniquely determines the thing you have in mind. It’s important to note that not all encodings are created equal. Instead of writing the number “2”, your friend could have also written: “go to the bagel shop next to the hardware store on the corner of Main Street.” These two descriptions may refer to the same thing, and after reading them you may have the same place in mind, but one of them is far more concise than the other.

Looking at information in this general sense, information theorists try to find ways to encode information efficiently (usually in 0’s and 1’s), as it is easier to send and receive smaller descriptions than bigger ones. There are even a couple of cool mathematical theorems stating that the most efficient way to encode information is with bits!

The communication problem can be summarized by the following diagram:

You have some message (in this case, “hey, want to hang out?”) that you want to send to your friend. You want him to be able to reconstruct the information in your message, so you send it across a channel that has noise. In other words, when you send your message, there is a chance that there is some distortion, e.g. that your message becomes “hey, want to work out?”.

Another way to think of noise is to think of you and your friend in the bagel shop in Bagelville. You’ve been dying to tell her about your newfound obsession with cat memes, so you try to communicate using your words. Since you chose the most popular shop in town, there are tons of people also talking and music playing in the background that makes it harder for your friend to understand you. This process is also an example of noise. You can think of noise as anything that makes it harder to discern the actual information of a message. Information theorists call the actual content of the message the signal, and one of the goals of engineering communication systems is to maximize the ratio between the signal and the noise, called the signal to noise ratio or SNR for short.

🤓

Information turns out to be something that you can quantitatively measure. Given the link between information and uncertainty, we can define a quantity that describes how much uncertainty we have about a given source of information called entropy. To make things more concrete, we need to utilize the mechanics of probability theory.

Consider a discrete random variable $U$ that can take values from an alphabet $\mathcal{U} = \{ u_1, u_2, \dots, u_M\}$. For each value $u_i$, there is a probability $p(U=u_i)$ that denotes how likely $U$ is to take on the value $u_i$. We can now define the “surprise” function: $S(u) = \log \frac{1}{p(u)}$ (here $\log = \log_2$). To gain some intuition about this object, consider the extremes of the input. If $p(u) = 1$, then the surprise function evaluates to zero, and if $p(u) \ll 1$, then the surprise function will be very large. We are now ready to define the entropy of the random variable $U$, denoted $H(U)$:

$H(U) = \mathbb{E}[S(U)] = -\sum_u p(u) \log p(u)$

You can think of the entropy as the expected surprise of the random variable. In other words, on average, how surprised are you when you see a manifestation of the random variable?

Entropy is linked to information in the following sense. The more entropy a random variable has, the more information is required to describe it. Think about the extreme situations. If a random variable is essentially deterministic (say always takes the value “1”), then you can just convey that it always takes on “1”. But if a random variable is uniformly distributed, you need to describe that it is sometimes “1”, sometimes “2”, sometimes “3”, etc. This definition of entropy can be expanded to more complex distributions, e.g. joint and conditional ones, by replacing the probability in the log with the desired distributions.
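The extremes above are easy to check numerically. Here is a small Python sketch (the `entropy` helper is my own, not a standard library function):

```python
import math

def entropy(probs):
    """Shannon entropy H(U) = -sum_u p(u) log2 p(u), measured in bits."""
    # The convention 0 * log 0 = 0 is handled by skipping zero-probability terms.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A deterministic variable is never surprising: zero bits needed.
print(entropy([1.0]))        # 0.0

# A uniform variable over 8 symbols is maximally surprising: 3 bits.
print(entropy([1/8] * 8))    # 3.0
```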

Now that we have entropy, we can also define a measure of information called mutual information. It essentially describes how easy it is to reconstruct one random variable from another. It is defined as:

$I(X;Y) = H(X) - H(X|Y)$

Note that if $X$ and $Y$ are independent, then the mutual information will be 0.
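To see this numerically, here is a sketch using the equivalent identity $I(X;Y) = H(X) + H(Y) - H(X,Y)$ (the same quantity as $H(X) - H(X|Y)$, just computed from a joint probability table; the helper names are mine):

```python
import math

def entropy(probs):
    """Shannon entropy in bits, skipping zero-probability terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from a joint table joint[x][y]."""
    px = [sum(row) for row in joint]            # marginal of X
    py = [sum(col) for col in zip(*joint)]      # marginal of Y
    pxy = [p for row in joint for p in row]     # flattened joint
    return entropy(px) + entropy(py) - entropy(pxy)

# Independent X and Y: the joint factorizes, so knowing Y tells you nothing.
independent = [[0.25, 0.25], [0.25, 0.25]]
print(mutual_information(independent))   # 0.0

# Perfectly correlated X and Y: knowing one fully determines the other.
correlated = [[0.5, 0.0], [0.0, 0.5]]
print(mutual_information(correlated))    # 1.0
```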

But how does this all relate to communication? Consider the diagram below…

Taken from EE376A Lecture 7 Notes 2018

Essentially, we are trying to convey a message $X^n$ to our friend through a noisy channel which produces a (potentially) altered message $Y^n$ based on the statistics of the channel $P_{Y^n|X^n}$. Note that given this probability distribution for the channel (and a little more information about $X$), we can calculate the mutual information between $X$ and $Y$. The higher the mutual information, the better the channel and the easier it is to reconstruct the original message.

# The Layered Model of Communication

Telecommunications is a complex problem to tackle. To make it manageable, people much smarter than me have developed the Open Systems Interconnection (OSI) Model that has seven layers of communication that incorporate both software and hardware components. The layers are shown here:

The general idea is that when a user interfaces with layer 7 (an application, e.g. email) and hits “send”, the information is converted into simpler and simpler forms as it moves down the layers and then gets sent over the physical link in its simplest form. The intended recipient of the message then receives the physical signal and reconstructs it into something legible to a human. Each layer has protocols (read: languages and customs) that everyone agrees on so that data processing can happen seamlessly.

Since in this post I will mainly focus on layer 1, I will give a quick general overview of what the different layers do and examples of their manifestations.

1. The Physical Layer: This layer is responsible for the barebones transport of bits (0s and 1s) from point A to point B. This can be done through voltages, light, or other media. The protocols specify all the basics about the communication: the rate of data transfer, what a 0 looks like, what a 1 looks like, etc. Examples of the physical layer include Bluetooth, USB, and Ethernet.
2. The Data Link Layer: The data link layer connects two directly connected devices (e.g. a computer and a router on the same wifi) in a network and helps establish when they can talk to each other. This layer also performs some rudimentary error correction when the physical layer messes up. The best known of these protocols is Media Access Control (MAC), which gives permission to devices on a network to talk to each other.
3. The Network Layer: This layer connects different local networks together. You may have your wifi router in your home in California, but you want to send an email to someone in Florida. The Internet Protocol (IP) provides a way to find efficient routes from one node in a network to another.
4. The Transport Layer: This layer ensures that all of the data that you are trying to send accurately makes it to the intended recipient. An example of this is the Transmission Control Protocol (TCP).
5. The Session Layer: When two devices need to talk to each other, a session is created that stays open as long as the devices are communicating. This layer handles the mechanics of setting up, coordinating, and terminating a session.
6. The Presentation Layer: When a raw stream of bits is received, it is not very useful unless you know what they are for. If you took the bits for a picture and put them in a text editor, you would get something really weird, but when you open them with picture viewing software, e.g. Preview, you can clearly see what the image is. The presentation layer can be thought of like the different file types that people use. Examples include JPEG, GIF, and more.
7. The Application Layer: This is where you, the user, come in. The application is the thing that presents the data to the end user, whether it be a web browser or email.

Don’t worry if you didn’t understand all of that. For the rest of the post, we will focus mainly on the physical link.

# A Primer on Electromagnetic Waves

Electromagnetic (EM) waves are essential to nearly everything we do. They often act as a carrier of information that travels INCREDIBLY fast. Take light as an example. Light is essential for our visual faculties, and photons (little particles of light) that bounce off of the objects around us reach our eyes almost instantly, giving us a real-time view of what is happening. Because of this, it is often said that “light is the carrier of information.” This section aims to give you a basic background on what EM waves are and why they are important to wireless communications.

😎

You may be familiar with electricity and magnetism, but what you may not know is that they are two sides of the same coin. When the strength of something electric changes, magnetism pops up. When the strength of something magnetic changes, electricity pops up. EM waves are bundles of electricity and magnetism that change in such a way that creates a self-reinforcing cycle.

A Diagram of an Electromagnetic Wave (Source)

As you can see in the diagram above, the strength of the electricity and magnetism “waves” from high to low, which thus creates a “waving” pattern in the other force. James Clerk Maxwell, known as the father of Electricity and Magnetism, discovered that these waves can propagate in free space at the speed of light ($c \approx$ 186,000 mi/s). To put that speed into perspective, if you were traveling at the speed of light, you could travel from New York to London in about 18 milliseconds.

How quickly the strength of the electricity and magnetism changes is called the frequency of the wave. Frequency is measured in a unit called Hertz, which is basically the number of peaks that you see in a second. The more peaks per second, i.e. the higher the frequency, the more energy that wave has. Frequency is closely related to the wavelength of the wave, which is the spatial distance between consecutive peaks. The two are related by the equation $v = \lambda f$, where $\lambda$ is the wavelength (measured in meters), $f$ is the frequency (measured in Hertz $= 1/s$), and $v$ is the speed of the wave (measured in meters per second). The larger the wavelength at a given speed, the smaller the frequency. Electromagnetic waves span a wide range of frequencies, and the different “bands” are used for different purposes. The spectrum is shown below.

The Electromagnetic Spectrum (Source)
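Plugging numbers into $v = \lambda f$ (with $v = c$ for EM waves in free space) gives a feel for the scales involved. A quick sketch, using frequencies you might recognize:

```python
c = 299_792_458  # speed of light in m/s

def wavelength(freq_hz):
    """lambda = v / f, with v = c for EM waves in free space."""
    return c / freq_hz

print(wavelength(2.4e9))   # ~0.125 m: Wi-Fi / microwave range
print(wavelength(100e6))   # ~3 m: FM radio range
```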

You may be familiar with X-rays and microwaves from your everyday experience in the doctor’s office or the kitchen, but you might not know that visible light, i.e. the light that you see with your eyes, is made of the same “stuff”! For the purpose of cell phones, we are going to focus on the low-frequency end of the spectrum: radio waves.

Radio waves are great for telecommunications because they travel at the speed of light and are good at traveling long distances without getting disturbed. We will get into how cell phones turn your text messages into radio waves later.

🤓

(This section will be a little shorter, as I will assume more knowledge of EM waves.) Electromagnetic waves are a consequence of Maxwell’s Equations, a set of four differential equations unified by James Clerk Maxwell in the 1860s.

Maxwell’s Equations (Source)

Here, $\mathbf{E}$ is the electric field, $\mathbf{H}$ is the magnetic field, and $\mathbf{J}$ is the current density, which isn’t really important for waves traveling in free space. If you solve the last two equations for $\mathbf{E}$ and $\mathbf{H}$, you will find a form of the wave equation:

The Wave Equation for the Electric Field (Source)

This gives rise to sinusoidal solutions that look like the ones in the pictures above. In free space, the waves are allowed to have any frequency and propagate at the speed of light $c$.

# Encoding Information Into Light

The process of encoding information into a wave is called modulation. It’s used all the time in radio communications and in cellular communications. In this section, I’ll try and give you a sense of how it works.

😎

Let’s consider what happens when you type a text message into your phone. First, the letters that you type are turned into a series of bits (0s and 1s). There is a standard for doing this with individual letters called the ASCII standard. Each letter is assigned a number, and that number is converted into a series of bits. You can think of the bits as answering a yes or no question about the letter. The first bit could answer the question: “Is the letter in the first half of the alphabet?” If it is “1”, then we know it is in the first half and we can discard the other letters as a possibility. The second bit can then represent us asking, “Is the letter in the first half of the first half of the alphabet?” (or alternatively, “Is the letter in the first half of the second half of the alphabet?” if the first bit is a “0”). We continue this process until we know precisely what letter we have.
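The letter-to-bits step can be sketched in a few lines of Python, using the built-in ASCII codes directly (the yes/no-question framing above is just an intuition for the same encoding):

```python
def text_to_bits(message):
    """Encode each character as its 8-bit ASCII code."""
    return ''.join(format(ord(ch), '08b') for ch in message)

print(text_to_bits('hi'))   # '0110100001101001'
```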

Once we have the series of bits that we are going to send, we can turn the bits into a wave. There are several ways to do this, so I’ll explain the simplest one. The signal “waves” for a little bit to represent a “1” and doesn’t “wave” for a little bit to represent a “0”. The following diagram should make this a little more clear for the sequence 1011:

Courtesy of Prof. Andrea Goldsmith

As you can see from the picture, once we receive the signal, we can simply measure the amplitude of the wave over a given period to determine whether the current bit is a zero or a one. It’s important to note that the sender and the receiver must agree on how long these 0’s and 1’s last and at what frequency they are being sent; otherwise, the message cannot be recovered. We will explain how these pulses are physically sent out in the next section.
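This whole wave-or-no-wave scheme (known as on-off keying) can be simulated in a short Python sketch. The function names are my own, and real systems are far more sophisticated:

```python
import math

def ook_modulate(bits, samples_per_bit=100, cycles_per_bit=5):
    """On-off keying: carrier on for a '1', silence for a '0'."""
    signal = []
    for bit in bits:
        for n in range(samples_per_bit):
            t = n / samples_per_bit
            wave = math.sin(2 * math.pi * cycles_per_bit * t)
            signal.append(wave if bit == '1' else 0.0)
    return signal

def ook_demodulate(signal, samples_per_bit=100):
    """Recover bits by checking each bit period for appreciable amplitude."""
    bits = ''
    for i in range(0, len(signal), samples_per_bit):
        chunk = signal[i:i + samples_per_bit]
        bits += '1' if max(abs(s) for s in chunk) > 0.5 else '0'
    return bits

wave = ook_modulate('1011')
print(ook_demodulate(wave))   # '1011'
```

Note that the demodulator bakes in the agreement mentioned above: it must know `samples_per_bit` (how long each bit lasts) to slice the signal correctly.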

🤓

The goal of modulation is to take a signal $x(t)$ (analog or digital) and turn it into an analog signal $y(t)$ that has some carrier frequency $\omega_c$, which can then be transmitted as an electromagnetic wave. There are two stages: modulation and demodulation.

1. Modulation: We take the signal $x(t)$ and multiply it by the carrier $c(t) = \cos (\omega_c t)$ to get $y(t)$. We can then use an antenna (explained in next section) to send this signal out.

The Process of Modulation (Courtesy of Prof. Joseph Kahn)

2. Demodulation: Once we receive the signal $y(t)$, we multiply it by the carrier again to get

$v(t) = c(t) y(t)= x(t) \cos^2(\omega_c t) = \frac{1}{2}x(t) + \frac{1}{2} x(t) \cos(2\omega_c t)$

If we then apply a filter that gets rid of the high frequencies, we are left with half the original signal. A schematic is shown below.

Demodulation (Courtesy of Prof. Joseph Kahn)
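The modulate-then-demodulate chain can be checked numerically. This is a rough sketch with arbitrary frequencies, using a moving average over one carrier period as a crude low-pass filter:

```python
import math

fc = 50.0      # carrier frequency in Hz (arbitrary for this demo)
fs = 10_000    # sampling rate in samples/s
N = fs         # one second of samples
t = [n / fs for n in range(N)]

# Slowly varying message signal x(t) and carrier c(t)
x = [1.0 + 0.5 * math.sin(2 * math.pi * 1.0 * ti) for ti in t]
c = [math.cos(2 * math.pi * fc * ti) for ti in t]

y = [xi * ci for xi, ci in zip(x, c)]   # modulation: y(t) = x(t) cos(w_c t)
v = [yi * ci for yi, ci in zip(y, c)]   # demodulation: v(t) = x(t) cos^2(w_c t)

# Averaging over one carrier period suppresses the 2*w_c term,
# leaving approximately x(t) / 2.
period = int(fs / fc)
recovered = [sum(v[i:i + period]) / period for i in range(0, N - period)]

print(recovered[0] * 2)   # close to x(0) = 1.0
```

Multiplying by 2 at the end undoes the factor of $\frac{1}{2}$ left over from the $\cos^2$ identity.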

There is a litany of extra steps (compression and error correction) that need to happen if you are transmitting digital signals, but they are outside the scope of this post. Here are some resources on compression and error correction.

# Antennas

So now that we can turn our information into something that can be sent as an electromagnetic wave, how do cell phones actually send signals out that get to other phones? If this description leaves you unsatisfied, check out this website for a comprehensive guide to how antennas work.

😎

An antenna is basically a piece of metal connected to a special kind of battery that is able to send and receive electromagnetic waves. It does so by applying electricity in such a way as to create the particular wave (as described in the EM primer section). The size of the antenna matters: it needs to be roughly half as large as the wavelength you are trying to send.

Each antenna also has a specific directivity, which is a measure of how concentrated in space its radiation is. Directivity is closely related to antenna size: the larger the antenna, the more focused you can make your beam in one direction. Since your phone has a relatively small antenna, it generally radiates waves isotropically, i.e. in all directions. Cell phone towers are essentially huge antennas, and they are able to beam radiation in the general direction of your phone rather than everywhere. This allows them to not waste power sending signals in every direction.

One important concept for antennas is bandwidth. Since a single antenna can radiate multiple frequencies, we can define bandwidth as the difference between the highest and lowest frequencies it radiates. This concept will become important when we discuss the cellular grid. Most cellular systems operate in bands between roughly 600 MHz and 6 GHz, with newer 5G deployments adding millimeter-wave bands above 24 GHz.

🤓

To make an antenna, we need a piece of wire hooked up to an alternating current source that can accelerate the electrons in the wire at the frequency we want to radiate. When the electrons move, they cause a change in the electric field, which in turn causes a shift in the surrounding magnetic field. This process continues, and the result is electromagnetic waves. The antenna’s impedance needs to be matched to the incoming transmission line, however, as otherwise part of the signal is reflected back toward the source and power is lost. This mostly matters at higher frequencies, which is important for cellular communications.

In addition to affecting directivity (defined above), the size of an antenna dictates how many different frequencies it can radiate. A general rule of thumb is that an antenna of size $l$ can radiate wavelengths of length $\lambda = 2l$.
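The rule of thumb $\lambda = 2l$ can be turned into a quick calculation of antenna size for a target frequency. A sketch, ignoring real-world corrections (such as the conductor's velocity factor):

```python
c = 299_792_458  # speed of light in m/s

def antenna_length(freq_hz):
    """Rule of thumb: size l radiates lambda = 2l, so l = c / (2 f)."""
    return c / (2 * freq_hz)

print(antenna_length(900e6))   # ~0.167 m for a 900 MHz cellular band
print(antenna_length(100e6))   # ~1.5 m for FM radio
```

This is why FM radio antennas are meter-scale rods while the cellular antennas inside your phone fit in your pocket.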

One cool thing you can do with antennas is to put many of them together in an array. This allows the waves to interfere with each other in a way that increases your directivity. Arrays are also generally more efficient, as they can radiate more power and increase the signal-to-noise ratio of incoming messages.

If you haven’t, make sure you read the definition of bandwidth above, as it will be important later.

An Example of an Array of Cellular Antennae on a Cell Phone Tower (Source)

# The Network Of Cell Towers

There are nearly 100,000 cell phone towers in the United States alone. In this section, I’ll try to explain how your phone talks to the tower and then how your message gets to its intended recipient. The general setup is depicted below:

A Schematic of Cellular Communication

😎

Cell phones are called “cell” phones because of the cellular nature of the arrangement of cell towers. Each one is given a region that it governs, and when your phone has service, it means that it is in communication with the nearest tower. A toy example of the grid is shown below:

The towers are arranged in a hexagonal lattice because it provides maximal coverage for the fewest number of towers. If they were arranged according to circles, there would be blackout spots that would not have any coverage, and if they were arranged in squares, then there would be a higher variability of signal strength in the cell.

Individual towers are connected by a series of fiber optic cables that use light (typically infrared, just outside the visible range) to transmit information from tower to tower. The exact nature of how a message gets from point A to point B is outside the scope of this post, but if you are interested, you can read more on the Internet Protocol. For messages to go overseas, e.g. to Europe or Asia, the messages travel through cables that have been laid down under the sea.

A Map of the Submarine Cables in the World (Source)

🤓

This grid of cellular towers would not work if all the towers were on the same frequency range. You can think of a frequency “band” as a channel over which communication can occur. If you and I try to use the same channel at the same time, our messages could interfere with each other and disrupt service. Towers that are next to each other therefore cannot be on the same frequency band. The hexagonal organization of the towers also allows for the same frequency to be reused with some spatial separation.

For each cell tower, there are hundreds or even thousands of phones trying to use cellular services at the same time. So how do cellular communication systems solve this problem? The answer lies in a technique called multiplexing. The basic idea of multiplexing is dividing the channel into different buckets by time or frequency and putting different messages in different buckets. Below is a depiction of what time-domain multiplexing looks like (where the different colors represent different users of the channel). Since cell phones operate at frequencies in the gigahertz range, they are able to fit in many time “buckets” per unit time.

Time Domain Multiplexing (Courtesy of Prof. Joseph Kahn)
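A toy version of time-division multiplexing is easy to write down: interleave the users' symbol streams round-robin, one slot at a time (the names here are purely illustrative):

```python
def time_division_multiplex(streams, slot_size=1):
    """Interleave several users' symbol streams into one channel,
    round-robin, one slot at a time."""
    channel = []
    position = 0
    while any(position < len(s) for s in streams):
        for stream in streams:
            channel.extend(stream[position:position + slot_size])
        position += slot_size
    return channel

alice = ['a1', 'a2', 'a3']
bob = ['b1', 'b2', 'b3']
print(time_division_multiplex([alice, bob]))
# ['a1', 'b1', 'a2', 'b2', 'a3', 'b3']
```

A receiver that knows the slot schedule simply picks out every $k$-th slot to recover its own stream.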

Similarly, you can do the same thing in the frequency domain. Below is what frequency-domain multiplexing looks like (where again the different colors represent different users of the channel):

Frequency-Domain Multiplexing (Courtesy of Prof. Joseph Kahn)

You can combine the two schemes to maximize the amount of information per unit time. This is where the concept of bandwidth comes into play. If we have high bandwidth, we can fit many more “buckets” in the channel and therefore transmit information at a higher rate.

# Putting It All Together

In summary, this is what happens when you press send on your message.

1. Your message gets encoded into a series of bits that represent the information you are trying to convey in a concise and efficient manner.
2. Your phone’s internal computer figures out how to modulate the signal so that it can be sent out as electromagnetic radiation.
3. The cell phone’s antenna radiates the message with some meta-information (e.g. who the recipient is, what kind of data it is) to the nearest cell tower.
4. The cell tower receives the message and decides which node in the network is the best to send the message to.
5. Step 4 repeats as the message arrives at the cell tower closest to your friend.
6. The final cell tower radiates the same signal to your friend’s phone.
7. Your friend’s phone demodulates, decrypts, and decompresses the signal and displays it on the screen.

If you have any questions or want more resources, feel free to email me at yous.hindy@gmail.com, and I’d be happy to send you resources.

# Acknowledgments

I’d like to thank Prof. Tsachy Weissman for advising me on this project and providing me with guidance and enthusiasm at every step of the way. I’d also like to thank Professors Jon Fan, Andrea Goldsmith, and Joseph Kahn for taking the time to meet with me and sharing the resources that made this post possible.

# Outreach Event @ Nixon Elementary

On March 17, 2019, as part of the EE376A course, I presented my work to a group of students from Nixon Elementary School in Stanford, CA. They ranged from K-5th grade and found the topic pretty fascinating. I didn’t realize that most of them didn’t own cell phones, but they were all familiar with the ones that their parents use. It certainly was difficult explaining these topics at a 1st-grade level, but it made writing this post a lot easier, as I had to really think deeply about these topics and how they could be simplified. Below is a picture of the poster that was presented. I also had a deconstructed cell phone and was able to show them the various components on the phone’s board like the various antennas, microphones, speakers, etc.

# Compression of High-Dimensional Neural Recordings


In a recent conversation with a neuroscience professor at Stanford, I found out that his lab spends over eight thousand dollars per month storing data. A clear case for lossless compression! In the following report, we explore lossless compression for high-dimensional neural electrode recordings on a sample dataset from Neuropixels probes. We compare universal compressors, wavelet-based methods, and a simple delta coding scheme, which gives roughly 50% compression on our dataset.
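To give a flavor of the delta coding scheme mentioned above, here is a minimal sketch (in practice, the small deltas would then be fed to a general-purpose compressor, which is where the space savings come from):

```python
def delta_encode(samples):
    """Store the first sample, then only successive differences.
    Neighboring neural samples are highly correlated, so the deltas
    are small and compress much better than the raw values."""
    deltas = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        deltas.append(cur - prev)
    return deltas

def delta_decode(deltas):
    """Invert delta_encode by cumulative summation."""
    samples = [deltas[0]]
    for d in deltas[1:]:
        samples.append(samples[-1] + d)
    return samples

raw = [1000, 1003, 1001, 998, 1002]
encoded = delta_encode(raw)
print(encoded)                         # [1000, 3, -2, -3, 4]
assert delta_decode(encoded) == raw    # lossless round trip
```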

## Neuropixels and the Data Storage Problem

### A Brief History of Neural Recording

There is a growing understanding among neuroscientists that in order to understand the brain, we simply need more data. Since the 1940s, scientists have used recordings from single neurons (so-called single-unit recordings) to probe how the brain works. From the first recordings made with glass electrodes, the number of neurons from which we are able to record has doubled approximately every 7 years, following a Moore’s-law-like pattern \cite{stevenson2011advances}.

The most recent of these advances are Neuropixels probes: CMOS-fabricated probes capable of recording from up to 384 channels (and thus from a similar number of individual neurons) simultaneously \cite{neuropixel}. Prior to this, state-of-the-art neural recording used MEMS-fabricated arrays capable of recording at most around 100 neurons.

All this is to say that there is a data revolution going on in neuroscience, and scientists will need to store all of this data.