Welcome to this series of Jupyter Notebooks from dspillustrations.com. In this series, we employ your computer's sound card as a wireless transmitter and receiver to establish a wireless link using the popular orthogonal frequency division multiplexing (OFDM) technique. The audio output and loudspeaker model the transmit antenna, whereas the microphone mimics the receive antenna. In between lies the wireless channel, i.e. the environment of the room you are sitting in. In the notebooks of this series, we assemble the fundamental building blocks of a basic OFDM transmitter and receiver.
In this notebook, which is not yet concerned with OFDM itself, we set up the audio environment. To this end, we transmit a sine wave of constant frequency and plot the time-domain waveform and frequency spectrum of the signal at the receiver side. In addition, we introduce the general streaming framework used in the subsequent notebooks.
First, we import some standard packages and enable the IPython autoreload extension.
import sys; sys.path.append('..')
import matplotlib
import matplotlib.pyplot as plt
%matplotlib notebook
%load_ext autoreload
%autoreload 2
import numpy as np
# Source code has been redacted in online version
# Omitting 4 lines of source code
Now, we import the required classes from the supplied library audioComms, which contains the fundamental building blocks for our system:
from audioComms import Environment
Every experiment needs to run in an environment, which is represented by an instance of the class Environment
. In the environment, we register several components. Subsequently, we start the audio streaming by running the environment.
from audioComms import TX
An instance of the class TX
describes a component which generates the transmit signal. We will use it as the base class for our waveform transmitter below.
from audioComms.channels import AudioChannel, SimulatedChannel
The AudioChannel
class implements the actual audio streaming, i.e. replaying the waveform at the audio output and recording the microphone input. The SimulatedChannel
class does not perform audio transmission, but simulates the transmission in software.
from audioComms.plotting import PlotSpectrum, PlotWaveform
Finally, PlotSpectrum and PlotWaveform visually display the received signal as a frequency- or time-domain plot.
Let's define our signal generator below. We subclass the TX class and override the _generateSignal method. This method is called whenever the transmit signal audio buffer runs out of data; in this case, we have to generate more transmit samples. Here, in order to generate a sine wave without phase jumps, we store the number of already transmitted samples and adjust the argument of the sine function accordingly.
class TransmitSine(TX):
    def __init__(self, environment, Fc=440):
        super().__init__(environment)
        self._Fc = Fc          # the frequency of the sine wave
        self._numsamples = 0   # the number of samples transmitted so far

    def _generateSignal(self):
        N = 10000              # number of samples to generate per call
        t = (np.arange(N) + self._numsamples) / self._samplerate
        self._numsamples += N
        return np.sin(2*np.pi*self._Fc*t)
In the following code cell, we set up our experiment. In general, we think of the following signal flow between the blocks:
%%tikz -l positioning
\tikzset{block/.style={draw,thick,minimum width=2cm,minimum height=1cm}}
\node (T) [block] {TransmitSine};
\node (C) [block,right=of T] {AudioChannel};
\node (P1) [block,right=of C] {PlotSpectrum};
\node (P2) [block,below right=of C] {PlotWaveform};
\draw [-stealth] (T) -- (C);
\draw [-stealth] (C) -- (P1);
\draw [-stealth] ([xshift=0.5cm]C.east) |- (P2);
Hence, we first define our environment with the sample rate to use. Second, we create the components shown in the diagram, and third, we establish the connections between the building blocks. Finally, we start the experiment. You can stop the experiment with the "Stop" button in the toolbar.
Attention: Due to some Jupyter caveats, the stop button does not always work reliably. In case the system cannot be stopped with the stop button, you can use another Python process to stop the system via UDP:
- go to the tools directory
- run python stopEnv.py
# Source code has been redacted in online version
# Omitting 21 lines of source code
If everything works well, you will hear a smooth sine tone at 440 Hz. In addition, the signal recorded by the microphone should show a 440 Hz sine wave of varying amplitude, depending on the volume and the position of the microphone. Also, you can see the spectrum of the received signal. Interestingly, at least with my sound card, you see not only the main peak at 440 Hz, but also a harmonic at 1320 Hz (3 times 440 Hz).
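The spectrum display can be mimicked with a plain NumPy FFT. The sketch below is an illustration under assumed parameters (the actual PlotSpectrum implementation may differ); a weak third harmonic with an assumed amplitude of 0.1 is added to imitate the sound card distortion described above:

```python
import numpy as np

samplerate = 44100
N = samplerate                  # one second of signal -> 1 Hz bin spacing
t = np.arange(N) / samplerate

# Fundamental at 440 Hz plus a weak third harmonic at 1320 Hz, imitating
# the nonlinear distortion of the sound card (0.1 is an assumed amplitude).
signal = np.sin(2*np.pi*440*t) + 0.1*np.sin(2*np.pi*1320*t)

spectrum = np.abs(np.fft.rfft(signal))          # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(N, d=1/samplerate)      # frequency axis in Hz

print(freqs[np.argmax(spectrum)])               # 440.0, the main peak
print(spectrum[1320] > spectrum[1000])          # True: harmonic sticks out
```

Since the signal length equals one second, each FFT bin corresponds to exactly 1 Hz, so both the fundamental and the harmonic fall on exact bins without leakage.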
If you cannot hear a tone or no signal is recorded, please refer to the documentation of python-sounddevice on how to set up the audio system. In addition, you might need to associate the right audio device with the Python process in the operating system's settings.
Even though it is interesting and encouraging to actually hear the transmitted signals, the sound will soon start to annoy your (and your neighbors'/parents'/flatmates'/partners') ears. In this situation, just use a cable between the audio output and the microphone input, which short-circuits the wireless link.
In contrast to the audio channel, where we never know exactly what the device is doing, we can also use a channel simulator, which in the simplest case is ideal, i.e. its output is exactly equal to its input. Let's run the same example with the simulated channel:
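The stand-in function below (hypothetical; it is not the actual SimulatedChannel API) illustrates what an ideal channel means: the output equals the input sample by sample.

```python
import numpy as np

def ideal_channel(signal):
    """Ideal channel model: the output is an exact copy of the input."""
    return signal.copy()

tx = np.sin(2 * np.pi * 440 * np.arange(1000) / 44100)
rx = ideal_channel(tx)
# A more realistic simulation could add attenuation and noise, e.g.:
# rx = 0.5 * tx + 0.01 * np.random.randn(len(tx))
print(np.array_equal(tx, rx))  # True
```

With such an ideal channel, any imperfection observed at the receiver must stem from the processing chain itself, which makes it a useful debugging tool before moving to real audio transmission.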
# Source code has been redacted in online version
# Omitting 21 lines of source code
We get a nice 440 Hz sine wave of amplitude 1 at the channel output, equal to the transmitted signal. Moreover, the spectrum is perfect, with no harmonics appearing.
We have successfully set up the audio interface to transmit and receive waveforms. Let us now proceed to the real signal processing parts of this course by considering up- and downconversion.
Copyright (C) 2018 - dspillustrations.com