mimo_ofdm_neural_receiver¶
This module implements a neural network-based receiver for MIMO-OFDM systems.
Configuration¶
- class demos.mimo_ofdm_neural_receiver.src.config.Config(perfect_csi=False, cdl_model='D', delay_spread=3e-07, carrier_frequency=2600000000.0, speed=0.0, num_bits_per_symbol=BitsPerSym.QPSK)[source]¶
Bases: object

Global configuration container for a MIMO-OFDM simulation setup.
This dataclass centralizes all simulation parameters and derived objects (ResourceGrid, StreamManagement, LDPC code lengths) to ensure consistency across Tx, Channel, and Rx components. Parameters are divided into two categories:
User-settable: Can be modified per experiment (e.g., cdl_model, perfect_csi). These control the simulation scenario.
Hard-coded (immutable): PHY/system constants validated for this demo (e.g., fft_size, num_bs_ant). Attempting to modify these after initialization raises AttributeError.
- Parameters:
perfect_csi (bool, (default False)) – If True, the receiver uses ground-truth channel state information instead of LS-estimated CSI. Useful for establishing performance upper bounds.
cdl_model ({"A", "B", "C", "D", "E"}, (default "D")) – 3GPP CDL channel model variant. Models A-C are NLOS with increasing delay spread; D-E are LOS. Model D provides moderate multipath suitable for neural receiver training.
delay_spread (float, (default 300e-9)) – RMS delay spread in seconds. Controls the temporal dispersion of the channel. Typical urban values: 100-500 ns.
carrier_frequency (float, (default 2.6e9)) – Carrier frequency in Hz. Affects Doppler spread and path loss characteristics in the CDL model.
speed (float, (default 0.0)) – UE speed in m/s. Zero indicates a static channel (no Doppler). Non-zero enables time-varying fading.
num_bits_per_symbol (BitsPerSym, (default BitsPerSym.QPSK)) – Modulation order. Accepts a BitsPerSym enum or an equivalent int. Higher orders increase spectral efficiency but require better SNR.
Note
The build() method is called automatically in __post_init__. After initialization, immutable fields are locked and cannot be changed. The following invariants hold:
n == rg.num_data_symbols * num_bits_per_symbol
k == n * coderate
num_streams_per_tx == num_ut_ant (one stream per UT antenna)
Example
>>> cfg = Config(cdl_model="C", perfect_csi=True)
>>> print(cfg.rg.num_data_symbols)
>>> print(cfg.k, cfg.n)  # LDPC code dimensions
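The code-dimension invariants noted above can be checked with plain arithmetic. A minimal sketch in plain Python, assuming hypothetical values for the number of data resource elements and the code rate (neither is taken from the demo's actual defaults):

```python
# Hypothetical values for illustration; the demo's actual resource grid
# and LDPC code rate may differ.
num_data_symbols = 912      # data REs per stream (assumed)
num_bits_per_symbol = 2     # QPSK
coderate = 0.5              # assumed code rate

n = num_data_symbols * num_bits_per_symbol  # codeword length
k = int(n * coderate)                       # information bits
print(n, k)
```

With these assumed values, n = 1824 and k = 912, matching the relationships n == rg.num_data_symbols * num_bits_per_symbol and k == n * coderate.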
- num_bits_per_symbol: BitsPerSym¶
- build()[source]¶
Construct derived objects (ResourceGrid, StreamManagement, LDPC lengths).
This method computes all dependent configuration objects from the base parameters. It is called automatically during __post_init__ and should not typically be called directly.
Pre-conditions¶
All base parameters (_fft_size, _num_ofdm_symbols, etc.) must be set to valid values.
Post-conditions¶
_rg contains a fully configured ResourceGrid.
_sm contains StreamManagement for single-user MIMO.
_n and _k satisfy the coderate relationship.
_num_streams_per_tx == _num_ut_ant (spatial multiplexing).
- Returns:
Self reference for method chaining.
- Return type:
Config
Note
The StreamManagement matrix [[1]] indicates a single TX-RX pair. Each UT antenna carries an independent data stream.
- property rg: sionna.phy.ofdm.ResourceGrid¶
Configured OFDM resource grid with pilot pattern.
- Type:
ResourceGrid
- property sm: sionna.phy.mimo.StreamManagement¶
MIMO stream-to-TX/RX mapping configuration.
- Type:
StreamManagement
- property num_streams_per_tx: int¶
Number of spatial streams per transmitter (equals num_ut_ant).
- Type:
int
Channel State Information¶
- class demos.mimo_ofdm_neural_receiver.src.csi.CSI(cfg)[source]¶
Bases: object

Channel State Information generator for MIMO-OFDM simulations.
This class manages the complete channel generation pipeline from antenna array configuration through to frequency-domain channel coefficients. A single CSI instance should be shared across all pipeline components (Tx, Channel, Rx) to ensure consistent channel realizations.
The channel generation follows the 3GPP TR 38.901 CDL model:
Antenna Arrays: Configured with dual cross-polarized elements following the 38.901 antenna pattern specification.
CDL Channel: Generates delay-domain channel impulse response with configurable model (A-E), delay spread, and Doppler.
Frequency Response: CIR is converted to the frequency domain via cir_to_ofdm_channel for OFDM processing.
- Parameters:
cfg (Config) – Configuration object containing PHY parameters (carrier frequency, antenna counts, CDL model selection, etc.).
- remove_nulled_scs¶
Utility layer for extracting channel coefficients on active (non-nulled) subcarriers. Used by Rx for perfect-CSI path.
- Type:
RemoveNulledSubcarriers
Note
The build() method must be called once per simulation batch to generate a new channel realization. The returned h_freq tensor should be passed to both the Channel and Rx components.
Example
>>> cfg = Config(cdl_model="C", carrier_frequency=3.5e9)
>>> csi = CSI(cfg)
>>> h_freq = csi.build(batch_size=32)
>>> # Use h_freq with Channel and Rx
- build(batch_size)[source]¶
Generate frequency-domain channel response for a batch of samples.
This method generates new channel impulse responses from the CDL model and converts them to frequency-domain coefficients. Each call produces an independent channel realization (different random path gains).
- Parameters:
batch_size (int or tf.Tensor) – Number of independent channel realizations to generate. Can be a Python int or a scalar TensorFlow tensor.
- Returns:
h_freq – Frequency-domain channel response with shape [batch, num_rx, num_rx_ant, num_tx, num_tx_ant, num_ofdm_symbols, fft_size].
Each element h_freq[b, r, ra, t, ta, s, f] is the complex channel gain from TX antenna ta of transmitter t to RX antenna ra of receiver r, on OFDM symbol s and subcarrier f, for batch sample b.
- Return type:
tf.Tensor, complex64
Note
The CDL model internally uses Sionna’s global RNG seeded in __init__. For reproducible results across runs, ensure cfg.seed is fixed and no other code modifies sionna.phy.config.seed between calls.
The channel varies across OFDM symbols if cfg.speed > 0 (Doppler).
For cfg.speed == 0, the channel is static within each batch sample but varies across batch samples.
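The seven-dimensional shape convention of h_freq can be illustrated with plain-Python bookkeeping. A sketch using hypothetical dimensions (the demo's actual antenna counts and grid size are not assumed here):

```python
# Hypothetical dimensions for illustration only.
batch, num_rx, num_rx_ant = 2, 1, 4
num_tx, num_tx_ant = 1, 2
num_ofdm_symbols, fft_size = 14, 76

# Shape of h_freq: [batch, num_rx, num_rx_ant, num_tx, num_tx_ant,
#                   num_ofdm_symbols, fft_size]
shape = (batch, num_rx, num_rx_ant, num_tx, num_tx_ant,
         num_ofdm_symbols, fft_size)

# One complex coefficient per (b, r, ra, t, ta, s, f) index tuple:
total_coeffs = 1
for dim in shape:
    total_coeffs *= dim
```

Each of the total_coeffs entries is one complex channel gain for a specific TX-antenna/RX-antenna pair on one resource element of one batch sample.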
Transmitter¶
- class demos.mimo_ofdm_neural_receiver.src.tx.Tx(cfg, channel_coding_off=False)[source]¶
Bases: object

MIMO-OFDM Transmitter with optional LDPC encoding.
Implements the transmit processing chain that generates OFDM resource grids from random information bits. The chain consists of:
Binary Source: Generates random bits (information or coded).
LDPC Encoder (optional): Applies 5G NR LDPC encoding.
QAM Mapper: Maps bit sequences to constellation symbols.
Resource Grid Mapper: Places symbols and pilots on OFDM grid.
- Parameters:
cfg (Config) – Configuration object containing modulation, coding, and resource grid parameters.
channel_coding_off (bool, (default False)) – If True, bypasses LDPC encoding and generates random coded bits directly. Used during training to avoid backpropagating through the non-differentiable encoder.
Note
In training mode, the neural receiver learns to predict LLRs for random bit patterns. The BCE loss compares predicted LLRs against the known transmitted coded bits c, enabling gradient-based optimization.
Example
>>> cfg = Config(num_bits_per_symbol=BitsPerSym.QPSK)
>>> tx = Tx(cfg, channel_coding_off=False)
>>> out = tx(batch_size=32, h_freq=h_freq)
>>> print(out["b"].shape)     # Information bits
>>> print(out["x_rg"].shape)  # Transmitted resource grid
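The QAM mapper stage of the chain can be sketched for QPSK in plain Python. This is an illustration of Gray-mapped, unit-energy QPSK only; the normalization convention of the actual Mapper layer is an assumption here, not taken from the demo:

```python
import math

def qpsk_map(bits):
    """Map bit pairs to unit-energy QPSK symbols (Gray mapping; the
    actual Mapper's sign/normalization convention is assumed)."""
    assert len(bits) % 2 == 0
    symbols = []
    for i in range(0, len(bits), 2):
        re = 1 - 2 * bits[i]      # bit 0 -> +1, bit 1 -> -1
        im = 1 - 2 * bits[i + 1]
        symbols.append(complex(re, im) / math.sqrt(2))
    return symbols

symbols = qpsk_map([0, 0, 1, 1])  # two QPSK symbols
```

Every symbol lies on the unit circle, so average symbol energy is 1 regardless of the bit pattern.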
Baseline Receiver¶
- class demos.mimo_ofdm_neural_receiver.src.rx.Rx(cfg, csi)[source]¶
Bases: object

Conventional MIMO-OFDM receiver with LS estimation and LMMSE equalization.
Implements the standard receive processing chain used as a baseline for neural receiver comparison. The chain consists of:
Channel Estimation: LS estimation at pilot positions with nearest-neighbor interpolation to data positions.
LMMSE Equalization: Linear minimum mean square error spatial filtering to separate MIMO streams.
Soft Demapping: APP (a posteriori probability) demapper producing soft LLR values for each coded bit.
LDPC Decoding: 5G NR LDPC decoder producing hard bit decisions.
- Parameters:
cfg (Config) – Configuration object containing PHY and resource grid parameters.
csi (CSI) – Shared CSI instance providing channel realizations (used for the perfect-CSI path).
Note
The receiver shares the CSI instance with the transmit chain to ensure the same channel realization is used for both transmission and perfect-CSI reception. This is critical for fair performance evaluation.
Example
>>> cfg = Config(perfect_csi=False)
>>> csi = CSI(cfg)
>>> rx = Rx(cfg, csi)
>>> out = rx(y, h_freq, no)
>>> decoded_bits = out["b_hat"]
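The LMMSE equalization principle can be sketched for the single-stream, per-subcarrier case in plain Python. This shows the idea only; the actual Rx applies matrix LMMSE across MIMO streams, and the unit-symbol-energy assumption is mine:

```python
def lmmse_equalize(y, h, no):
    """Per-subcarrier single-stream LMMSE estimate (sketch of the
    principle; assumes unit symbol energy)."""
    return (h.conjugate() * y) / (abs(h) ** 2 + no)

h = 0.8 + 0.6j          # hypothetical channel gain, |h| = 1
x = 1.0 + 0.0j          # transmitted symbol
no = 0.1                # noise power
y = h * x               # noiseless observation, for illustration
x_hat = lmmse_equalize(y, h, no)
```

Note the characteristic LMMSE bias: even without noise, x_hat = |h|^2 / (|h|^2 + no) times x, i.e. the estimate shrinks toward zero as the noise power grows.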
Neural Receiver¶
- class demos.mimo_ofdm_neural_receiver.src.neural_rx.ResidualBlock(*args, **kwargs)[source]¶
Bases: Layer

Residual block with convolutions and layer normalization.
Implements a pre-activation residual block where normalization and activation precede each convolution. The skip connection enables gradient flow through deep networks and allows the block to learn residual refinements rather than full transformations.
- Architecture per layer:
LayerNorm -> ReLU -> Conv2D(3x3)
The block applies num_resnet_layers such layers sequentially, then adds the input via skip connection.
- Parameters:
- Raises:
ValueError – If num_resnet_layers < 1.
Note
Layer normalization is applied over spatial and channel dimensions (axes -1, -2, -3) rather than batch normalization. This provides more stable training with small batch sizes and varying SNR conditions.
The 3x3 kernel with ‘same’ padding preserves spatial dimensions, allowing the skip connection to work without dimension adjustment.
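The pre-activation ordering (LayerNorm, then ReLU, then convolution) can be illustrated on a flat feature vector in plain Python. A sketch of the normalization and activation steps only; the learned scale/offset parameters of LayerNormalization and the convolution itself are omitted:

```python
import math

def layer_norm(x, eps=1e-6):
    """Normalize a feature vector to zero mean and unit variance, as
    layer normalization does over the spatial/channel axes (sketch;
    learned scale and offset parameters are omitted)."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def relu(x):
    return [max(0.0, v) for v in x]

features = [1.0, 2.0, 3.0, 4.0]
normed = relu(layer_norm(features))  # pre-activation: normalize first
```

Because the statistics are computed per sample rather than per batch, the normalization behaves identically for batch size 1 and batch size 1000, which is why it is preferred here over batch normalization.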
- call(inputs)[source]¶
Apply residual transformation to input tensor.
- Parameters:
inputs (tf.Tensor, float32, [batch, height, width, channels]) – Input feature maps. Channel dimension must match num_conv2d_filters for the skip connection to work.
- Returns:
tf.Tensor, float32, [batch, height, width, channels] – Output feature maps with same shape as input.
Pre-conditions¶
Input must be float32 (an assertion checks this for debugging).
Input channels should equal num_conv2d_filters.
Post-conditions¶
Output shape equals input shape.
Output = transform(input) + input (residual connection).
Invariants¶
Spatial dimensions are preserved (3x3 conv with ‘same’ padding).
- class demos.mimo_ofdm_neural_receiver.src.neural_rx.NeuralRx(*args, **kwargs)[source]¶
Bases: Layer

Convolutional neural receiver mapping received signals to LLRs.
This network replaces the traditional channel estimation, equalization, and demapping stages with a learned CNN that directly produces log-likelihood ratios for each coded bit. The architecture processes the received signal across the full time-frequency resource grid.
- Architecture:
Input preparation: Concatenate [Re(y), Im(y), log10(no)]
Input convolution: Expand to num_conv2d_filters channels
Residual stack: num_res_blocks residual blocks
Output convolution: Reduce to num_streams x bits_per_symbol
Reshape: Reorganize to per-stream, per-bit LLR format
Resource grid demapper: Extract data symbol positions
LDPC decoder (optional): Decode to information bits
- Parameters:
cfg (Config) – Configuration containing resource grid, modulation, and code params.
channel_coding_off (bool, (default False)) – If True, skip LDPC decoding and return raw LLRs. Used during training to compute BCE loss against transmitted coded bits.
num_conv2d_filters (int, (default 128)) – Channel dimension throughout the residual stack.
num_resnet_layers (int, (default 2)) – Number of conv layers per residual block.
num_res_blocks (int, (default 4)) – Number of residual blocks in the network.
Note
The noise power is fed in log10 scale because:
1. SNR varies over orders of magnitude during training
2. Log scale provides more uniform gradient behavior
3. It empirically improves convergence and final performance
Example
>>> cfg = Config(num_bits_per_symbol=BitsPerSym.QPSK)
>>> neural_rx = NeuralRx(cfg, channel_coding_off=True)
>>> out = neural_rx(y, no, batch_size)
>>> llrs = out["llr"]  # Shape: [batch, 1, num_streams, n]
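The benefit of the log10 scaling described in the note above can be seen numerically. A sketch, assuming the simplest relation no = 10^(-EbN0_dB/10) with unit symbol energy (the demo's actual conversion also accounts for modulation order and code rate):

```python
import math

# Noise power for three Eb/N0 operating points (simplified conversion,
# unit symbol energy assumed).
nos = {ebno_db: 10 ** (-ebno_db / 10) for ebno_db in (0.0, 10.0, 20.0)}
log_nos = {e: math.log10(no) for e, no in nos.items()}
# nos spans two orders of magnitude (1.0 down to 0.01), while log10(no)
# varies linearly from 0 to -2 -- a much friendlier input range for a CNN.
```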
- call(y, no, batch_size)[source]¶
Process received signal to produce LLRs and optionally decoded bits.
- Parameters:
y (tf.Tensor, complex64, [batch, num_rx, num_rx_ant, num_ofdm_symbols, fft_size]) – Received OFDM signal after channel and noise.
no (tf.Tensor, float32, [batch] or scalar) – Noise power spectral density.
batch_size (tf.Tensor, int32, scalar) – Batch dimension size (needed for reshape operations in graph mode).
- Returns:
Dictionary containing:
"llr": Predicted log-likelihood ratios, shape [batch, 1, num_ut_ant, n].
"b_hat": Decoded information bits, shape [batch, 1, num_ut_ant, k]. None if channel_coding_off=True.
- Return type:
Dict[str, tf.Tensor]
Note
The tensor transformations in this method follow a specific sequence:
1. Remove the num_rx dimension (assuming a single receiver)
2. Transpose to [batch, ofdm_symbols, subcarriers, antennas]
3. Split complex values into real channels: 2 x num_rx_ant + 1 (noise) channels
4. Process through the CNN
5. Reshape the output to match ResourceGridDemapper expectations
6. Extract data positions and reshape for decoder input
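The shape changes through the first three steps of this sequence can be tracked with plain-Python tuple bookkeeping. A sketch with hypothetical dimensions (not the demo's actual configuration):

```python
# Hypothetical dimensions for illustration.
batch, num_rx, num_rx_ant = 8, 1, 4
num_ofdm_symbols, fft_size = 14, 76

y_shape = (batch, num_rx, num_rx_ant, num_ofdm_symbols, fft_size)

# Step 1: drop the num_rx dimension (single receiver assumed).
s = (batch, num_rx_ant, num_ofdm_symbols, fft_size)
# Step 2: move antennas to the trailing (channel) axis.
s = (batch, num_ofdm_symbols, fft_size, num_rx_ant)
# Step 3: split complex into real/imag channels and append log10(no).
cnn_input = (batch, num_ofdm_symbols, fft_size, 2 * num_rx_ant + 1)
```

With 4 RX antennas the CNN therefore sees 9 input channels: 4 real parts, 4 imaginary parts, and one noise channel.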
System¶
- class demos.mimo_ofdm_neural_receiver.src.system.System(*args, **kwargs)[source]¶
Bases: Model

End-to-end MIMO-OFDM system with baseline and neural receiver options.
This Keras Model composes all simulation components and provides a unified interface for both training and inference. The system generates transmitted signals, applies channel effects, and processes received signals through either a conventional or neural receiver.
The processing pipeline is:
CSI Generation: Create frequency-domain channel response
Transmission: Generate bits, encode, modulate, map to OFDM grid
Channel: Apply frequency-domain channel and add AWGN
Reception: Process received signal (baseline LMMSE or neural CNN)
Output: Return loss (training) or bit tensors (inference)
- Parameters:
training (bool, (default False)) – If True, configure for training mode: disable channel coding in Tx/Rx and return the BCE loss instead of bit tensors.
perfect_csi (bool, (default False)) – If True, baseline receiver uses ground-truth CSI. Only affects baseline Rx; neural Rx never uses explicit CSI.
cdl_model ({"A", "B", "C", "D", "E"}, (default "D")) – 3GPP CDL channel model variant.
delay_spread (float, (default 300e-9)) – RMS delay spread in seconds.
carrier_frequency (float, (default 2.6e9)) – Carrier frequency in Hz.
speed (float, (default 0.0)) – UE speed in m/s for Doppler modeling.
num_bits_per_symbol (BitsPerSym, (default BitsPerSym.QPSK)) – Modulation order.
use_neural_rx (bool, (default False)) – If True, use neural receiver; otherwise use baseline LMMSE receiver.
num_conv2d_filters (int, (default 128)) – Neural receiver CNN width.
num_resnet_layers (int, (default 2)) – Layers per residual block in neural receiver.
num_res_blocks (int, (default 4)) – Number of residual blocks in neural receiver.
name (str, (default "system")) – Keras model name for variable scoping.
- bce¶
Loss function for training (expects logits, not probabilities).
- Type:
tf.keras.losses.BinaryCrossentropy
Note
The system accepts Eb/N0 in dB and internally converts it to noise power using ebnodb2no. This allows consistent SNR specification across different modulation orders and code rates.
Both __call__ and call_scalar are provided:
__call__: takes a vector Eb/N0 (one per batch sample).
call_scalar: takes a scalar Eb/N0 (broadcast to all samples).
The scalar variant is required for compatibility with Sionna’s PlotBER.simulate(), which passes scalar SNR values.
Example
>>> # Training
>>> system = System(training=True, use_neural_rx=True)
>>> loss = system(batch_size, ebno_db_vector)
>>> # Inference
>>> system = System(training=False, use_neural_rx=True)
>>> b, b_hat = system(batch_size, ebno_db_vector)
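The principle behind the Eb/N0-to-noise-power conversion can be sketched in plain Python. This is a simplified version: Sionna's ebnodb2no additionally accounts for resource-grid overhead (pilots, guard carriers) when given a resource grid, so treat the formula as an approximation under a unit-symbol-energy assumption:

```python
def ebnodb2no_simple(ebno_db, num_bits_per_symbol, coderate):
    """Convert Eb/N0 in dB to noise power N0, assuming unit symbol
    energy and ignoring resource-grid overhead (simplified relative
    to Sionna's ebnodb2no)."""
    ebno_lin = 10 ** (ebno_db / 10)
    # Es/N0 = Eb/N0 * bits_per_symbol * coderate, with Es = 1:
    return 1.0 / (ebno_lin * num_bits_per_symbol * coderate)

no = ebnodb2no_simple(10.0, 2, 0.5)  # QPSK, rate-1/2
```

This is why the same Eb/N0 value maps to different noise powers for different modulation orders and code rates, keeping BER curves comparable across configurations.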
- call_scalar(batch_size, ebno_db_scalar)¶
Forward pass with scalar Eb/N0 (for PlotBER compatibility).
This method broadcasts a single Eb/N0 value to all batch samples, providing compatibility with Sionna’s PlotBER.simulate(), which calls the model with scalar SNR values.
- Parameters:
batch_size (tf.Tensor, int32, scalar) – Number of samples in the batch.
ebno_db_scalar (tf.Tensor, float32, scalar) – Eb/N0 in dB, applied uniformly to all batch samples.
- Return type:
See __call__ for return value documentation.
Note
This is a thin wrapper that expands the scalar to a vector and delegates to __call__.
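The wrapper pattern can be sketched in plain Python. This is an illustration of the scalar-to-vector expansion only; the real method would broadcast with TensorFlow ops inside the Keras model (an assumption, since the implementation is not shown here):

```python
def call_scalar_sketch(system_call, batch_size, ebno_db_scalar):
    """Expand a scalar Eb/N0 to one value per batch sample and
    delegate to the vector-valued forward pass (plain-Python sketch)."""
    ebno_db_vector = [ebno_db_scalar] * batch_size
    return system_call(batch_size, ebno_db_vector)

# Stand-in for __call__ that just echoes the Eb/N0 vector it receives:
result = call_scalar_sketch(lambda bs, vec: vec, 4, 6.5)
```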