Tutorial: Differentiable Communication Systems Using Sionna
Binary Source Generation:
A binary source is created to generate random, independent, and identically distributed (i.i.d.) binary sequences.
Code:
binary_source = sionna.utils.BinarySource()
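Calling the source with the desired output shape returns a fresh batch of bits. A minimal usage sketch, assuming BATCH_SIZE and NUM_BITS_PER_SYMBOL from the tutorial's setup and an illustrative block length of 1024 symbols per example:
bits = binary_source([BATCH_SIZE, 1024 * NUM_BITS_PER_SYMBOL])  # i.i.d. bits, shape [BATCH_SIZE, 1024 * NUM_BITS_PER_SYMBOL]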
QAM Modulation:
The generated binary sequences are mapped to QAM symbols using a trainable constellation.
The constellation can be made trainable by setting the parameter trainable=True during instantiation.
Code:
constellation = sionna.mapping.Constellation("qam", NUM_BITS_PER_SYMBOL, trainable=True)
mapper = sionna.mapping.Mapper(constellation=constellation)
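Applied to a bit tensor, the mapper groups every NUM_BITS_PER_SYMBOL bits into one constellation point. A minimal usage sketch, reusing the bits tensor from above:
x = mapper(bits)  # complex QAM symbols, shape [BATCH_SIZE, 1024]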
Transmission through AWGN Channel:
The QAM symbols are transmitted over an Additive White Gaussian Noise (AWGN) channel.
Code:
awgn_channel = sionna.channel.AWGN()
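The channel expects the transmitted symbols together with the noise variance N0, which can be derived from Eb/N0. A sketch assuming an uncoded transmission (coderate 1.0) and an illustrative Eb/N0 of 10 dB:
no = sionna.utils.ebnodb2no(ebno_db=10.0, num_bits_per_symbol=NUM_BITS_PER_SYMBOL, coderate=1.0)  # noise variance N0
y = awgn_channel([x, no])  # received noisy symbols, same shape as x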
Demodulation and Bit Recovery:
The received signal is demodulated to recover the transmitted bits.
Two demapping methods are employed:
Baseline LLR Demapper: Computes the Log-Likelihood Ratios (LLRs) to decode the bits. This serves as the baseline for comparison.
Code:
demapper = sionna.mapping.Demapper("app", constellation=constellation)
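Like the channel, the demapper takes the received symbols together with the noise variance. A minimal usage sketch, reusing y and no from above:
llr = demapper([y, no])  # one LLR per transmitted bit, shape [BATCH_SIZE, 1024 * NUM_BITS_PER_SYMBOL]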
Neural Demapper: A deep neural network (DNN) with three dense layers is used to predict the bit sequences directly from the received signal. The final layer outputs logits (LLRs).
Code:
import tensorflow as tf
from tensorflow.keras.layers import Layer, Dense

class NeuralDemapper(Layer):
    def __init__(self):
        super().__init__()
        self.dense_1 = Dense(64, 'relu')
        self.dense_2 = Dense(64, 'relu')
        self.dense_3 = Dense(NUM_BITS_PER_SYMBOL, None)  # Outputs logits, i.e., LLRs

    def call(self, y):
        # Forward pass (elided in the original; a sketch consistent with the layers above):
        # feed the real and imaginary parts of each received symbol to the network
        nn_input = tf.stack([tf.math.real(y), tf.math.imag(y)], axis=-1)
        z = self.dense_1(nn_input)
        z = self.dense_2(z)
        z = self.dense_3(z)  # [batch size, num symbols, NUM_BITS_PER_SYMBOL]
        llr = tf.reshape(z, [tf.shape(y)[0], -1])  # [batch size, num bits per block]
        return llr
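Instantiating the layer and applying it to the channel output (sketch):
neural_demapper = NeuralDemapper()
llr = neural_demapper(y)  # logits interpreted as LLRs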
The neural demapper is trained using a Binary Cross-Entropy (BCE) loss function:
The BCE loss compares the true bits (bits) with the predicted logits (llr) output by the neural demapper.
Code:
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
loss = bce(bits, llr)
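Since the mapper, the channel, and the demapper are all differentiable, the trainable constellation and the neural demapper can be optimized jointly by stochastic gradient descent. A minimal training-step sketch, assuming the components and the noise variance no defined above:
optimizer = tf.keras.optimizers.Adam()
with tf.GradientTape() as tape:
    bits = binary_source([BATCH_SIZE, 1024 * NUM_BITS_PER_SYMBOL])
    x = mapper(bits)
    y = awgn_channel([x, no])
    llr = neural_demapper(y)
    loss = bce(bits, llr)
# Gradients flow through the demapper, the channel, and the trainable constellation
grads = tape.gradient(loss, tape.watched_variables())
optimizer.apply_gradients(zip(grads, tape.watched_variables()))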
Performance Evaluation:
The performance of both demappers is evaluated based on the Bit Error Rate (BER).
The neural demapper is benchmarked against the baseline LLR demapper using BER plots.
Code for performance evaluation:
# baseline and neural_demapper here wrap the full end-to-end systems being compared
ber_plots = sionna.utils.plotting.PlotBER("Neural Demapper")
ber_plots.simulate(baseline,
                   ebno_dbs=np.linspace(EBN0_DB_MIN, EBN0_DB_MAX, 20),
                   batch_size=BATCH_SIZE)
ber_plots.simulate(neural_demapper,
                   ebno_dbs=np.linspace(EBN0_DB_MIN, EBN0_DB_MAX, 20),
                   batch_size=BATCH_SIZE)
Results indicate that the neural demapper achieves performance comparable to the baseline LLR demapper.