mimo_ofdm_neural_receiver

This module implements a neural network-based receiver for MIMO-OFDM systems.

Configuration

class demos.mimo_ofdm_neural_receiver.src.config.Config(perfect_csi=False, cdl_model='D', delay_spread=3e-07, carrier_frequency=2600000000.0, speed=0.0, num_bits_per_symbol=BitsPerSym.QPSK)[source]

Bases: object

Global configuration container for a MIMO-OFDM simulation setup.

This dataclass centralizes all simulation parameters and derived objects (ResourceGrid, StreamManagement, LDPC code lengths) to ensure consistency across Tx, Channel, and Rx components. Parameters are divided into two categories:

  1. User-settable: Can be modified per experiment (e.g., cdl_model, perfect_csi). These control the simulation scenario.

  2. Hard-coded (immutable): PHY/system constants validated for this demo (e.g., fft_size, num_bs_ant). Attempting to modify these after initialization raises AttributeError.
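The locking behavior can be sketched with a minimal stand-in class; the names below (`LockedConfig`, `_fft_size`) are hypothetical and only illustrate the pattern of rejecting writes after initialization:

```python
# Minimal sketch of "immutable after initialization"; hypothetical names.
class LockedConfig:
    def __init__(self):
        self._fft_size = 76   # assumed hard-coded PHY constant
        self._locked = True   # set last: writes after this point are rejected

    def __setattr__(self, name, value):
        if getattr(self, "_locked", False):
            raise AttributeError(f"{name} is immutable after initialization")
        super().__setattr__(name, value)

cfg = LockedConfig()
try:
    cfg._fft_size = 128
except AttributeError as e:
    print(e)  # _fft_size is immutable after initialization
```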

Parameters:
  • perfect_csi (bool, (default False)) – If True, the receiver uses ground-truth channel state information instead of LS-estimated CSI. Useful for establishing performance upper bounds.

  • cdl_model ({"A", "B", "C", "D", "E"}, (default "D")) – 3GPP CDL channel model variant. Models A-C are NLOS with increasing delay spread; D-E are LOS. Model D provides moderate multipath suitable for neural receiver training.

  • delay_spread (float, (default 300e-9)) – RMS delay spread in seconds. Controls the temporal dispersion of the channel. Typical urban values: 100-500 ns.

  • carrier_frequency (float, (default 2.6e9)) – Carrier frequency in Hz. Affects Doppler spread and path loss characteristics in the CDL model.

  • speed (float, (default 0.0)) – UE speed in m/s. Zero indicates a static channel (no Doppler). Non-zero enables time-varying fading.

  • num_bits_per_symbol (BitsPerSym, (default BitsPerSym.QPSK)) – Modulation order. Accepts BitsPerSym enum or equivalent int. Higher orders increase spectral efficiency but require better SNR.

Note

  • The build() method is called automatically in __post_init__. After initialization, immutable fields are locked and cannot be changed.

  • n == rg.num_data_symbols * num_bits_per_symbol

  • k == n * coderate

  • num_streams_per_tx == num_ut_ant (one stream per UT antenna)
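The codeword-size relationships above can be checked with hypothetical numbers; the real values come from the ResourceGrid that build() constructs:

```python
# Hypothetical sizes for illustration only; Config.build() derives the
# actual values from the ResourceGrid.
num_data_symbols = 768       # data resource elements per stream (assumed)
num_bits_per_symbol = 2      # QPSK
coderate = 0.5               # assumed LDPC code rate

n = num_data_symbols * num_bits_per_symbol  # coded bits per codeword
k = int(n * coderate)                       # information bits per codeword
print(n, k)  # 1536 768
```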

Example

>>> cfg = Config(cdl_model="C", perfect_csi=True)
>>> print(cfg.rg.num_data_symbols)
>>> print(cfg.k, cfg.n)  # LDPC code dimensions
perfect_csi: bool
cdl_model: Literal['A', 'B', 'C', 'D', 'E']
delay_spread: float
carrier_frequency: float
speed: float
num_bits_per_symbol: BitsPerSym
build()[source]

Construct derived objects (ResourceGrid, StreamManagement, LDPC lengths).

This method computes all dependent configuration objects from the base parameters. It is called automatically during __post_init__ and should not typically be called directly.

Pre-conditions

  • All base parameters (_fft_size, _num_ofdm_symbols, etc.) must be set to valid values.

Post-conditions

  • _rg contains a fully configured ResourceGrid.

  • _sm contains StreamManagement for single-user MIMO.

  • _n and _k satisfy the coderate relationship.

  • _num_streams_per_tx == _num_ut_ant (spatial multiplexing).

Returns:

Self reference for method chaining.

Return type:

Config

Note

The StreamManagement matrix [[1]] indicates a single TX-RX pair. Each UT antenna carries an independent data stream.

property rg: sionna.phy.ofdm.ResourceGrid

Configured OFDM resource grid with pilot pattern.

Type:

ResourceGrid

property sm: sionna.phy.mimo.StreamManagement

MIMO stream-to-TX/RX mapping configuration.

Type:

StreamManagement

property k: int

Number of information bits per LDPC codeword.

Type:

int

property n: int

Number of coded bits per LDPC codeword.

Type:

int

property num_streams_per_tx: int

Number of spatial streams per transmitter (equals num_ut_ant).

Type:

int

property direction: str

Link direction, either ‘uplink’ or ‘downlink’.

Type:

str

property subcarrier_spacing: float

OFDM subcarrier spacing in Hz.

Type:

float

property fft_size: int

FFT size determining the number of subcarriers.

Type:

int

property num_ofdm_symbols: int

Number of OFDM symbols per slot/frame.

Type:

int

property cyclic_prefix_length: int

Cyclic prefix length in samples.

Type:

int

property num_guard_carriers: Tuple[int, int]

Number of guard subcarriers (lower, upper).

Type:

Tuple[int, int]

property dc_null: bool

Whether the DC subcarrier is nulled.

Type:

bool

property pilot_pattern: str

Pilot pattern type (e.g., ‘kronecker’).

Type:

str

property pilot_ofdm_symbol_indices: Tuple[int, ...]

OFDM symbol indices containing pilots.

Type:

Tuple[int, …]

property num_ut_ant: int

Number of user terminal antennas (transmit side in uplink).

Type:

int

property num_bs_ant: int

Number of base station antennas (receive side in uplink).

Type:

int

property modulation: str

Modulation type (e.g., ‘qam’).

Type:

str

property coderate: float

LDPC code rate (k/n ratio).

Type:

float

property seed: int

Random seed for reproducible simulations.

Type:

int

class demos.mimo_ofdm_neural_receiver.src.config.BitsPerSym(value)[source]

Bases: IntEnum

Enumeration of supported modulation orders.

Maps modulation scheme names to their bits-per-symbol values. The integer value represents log2 of the constellation size.

BPSK = 1
QPSK = 2
QAM16 = 4
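Because the enum value is log2 of the constellation size, the size is recovered as a power of two. A stand-alone mirror of the enum:

```python
from enum import IntEnum

# Stand-alone mirror of BitsPerSym; the value is log2 of the constellation size.
class BitsPerSym(IntEnum):
    BPSK = 1
    QPSK = 2
    QAM16 = 4

for m in BitsPerSym:
    print(m.name, 2 ** m.value)  # constellation sizes: 2, 4, 16
```

Being an IntEnum, members also compare and compute as plain ints, which is why the Config accepts "BitsPerSym enum or equivalent int".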

Channel State Information

class demos.mimo_ofdm_neural_receiver.src.csi.CSI(cfg)[source]

Bases: object

Channel State Information generator for MIMO-OFDM simulations.

This class manages the complete channel generation pipeline from antenna array configuration through to frequency-domain channel coefficients. A single CSI instance should be shared across all pipeline components (Tx, Channel, Rx) to ensure consistent channel realizations.

The channel generation follows the 3GPP TR 38.901 CDL model:

  1. Antenna Arrays: Configured with dual cross-polarized elements following the 38.901 antenna pattern specification.

  2. CDL Channel: Generates delay-domain channel impulse response with configurable model (A-E), delay spread, and Doppler.

  3. Frequency Response: CIR is converted to frequency-domain via cir_to_ofdm_channel for OFDM processing.

Parameters:

cfg (Config) – Configuration object containing PHY parameters (carrier frequency, antenna counts, CDL model selection, etc.).

cfg

Reference to the configuration object.

Type:

Config

remove_nulled_scs

Utility layer for extracting channel coefficients on active (non-nulled) subcarriers. Used by Rx for perfect-CSI path.

Type:

RemoveNulledSubcarriers

Note

The build() method must be called once per simulation batch to generate a new channel realization. The returned h_freq tensor should be passed to both the Channel and Rx components.

Example

>>> cfg = Config(cdl_model="C", carrier_frequency=3.5e9)
>>> csi = CSI(cfg)
>>> h_freq = csi.build(batch_size=32)
>>> # Use h_freq with Channel and Rx
build(batch_size)[source]

Generate frequency-domain channel response for a batch of samples.

This method generates new channel impulse responses from the CDL model and converts them to frequency-domain coefficients. Each call produces an independent channel realization (different random path gains).

Parameters:

batch_size (int or tf.Tensor) – Number of independent channel realizations to generate. Can be a Python int or a scalar TensorFlow tensor.

Returns:

h_freq – Frequency-domain channel response with shape [batch, num_rx, num_rx_ant, num_tx, num_tx_ant, num_ofdm_symbols, fft_size].

Each element h_freq[b,r,ra,t,ta,s,f] is the complex channel gain from TX antenna ta of transmitter t to RX antenna ra of receiver r, on OFDM symbol s and subcarrier f, for batch sample b.

Return type:

tf.Tensor, complex64

Note

  • The CDL model internally uses Sionna’s global RNG seeded in __init__. For reproducible results across runs, ensure cfg.seed is fixed and no other code modifies sionna.phy.config.seed between calls.

  • Channel varies across OFDM symbols if cfg.speed > 0 (Doppler).

  • For cfg.speed == 0, channel is static within each batch sample but varies across batch samples.
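The documented h_freq layout and indexing convention can be exercised on a dummy tensor; every size below is an assumption chosen for illustration, not a value fixed by the demo:

```python
import numpy as np

# Dummy tensor with the documented h_freq layout; sizes are assumptions.
batch, num_rx, num_rx_ant, num_tx, num_tx_ant = 4, 1, 8, 1, 2
num_ofdm_symbols, fft_size = 14, 76
h_freq = np.zeros((batch, num_rx, num_rx_ant, num_tx, num_tx_ant,
                   num_ofdm_symbols, fft_size), dtype=np.complex64)

# h_freq[b, r, ra, t, ta, s, f]: gain from TX antenna ta of transmitter t
# to RX antenna ra of receiver r, on OFDM symbol s and subcarrier f.
gain = h_freq[0, 0, 3, 0, 1, 7, 40]
print(h_freq.ndim)  # 7
```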

Transmitter

class demos.mimo_ofdm_neural_receiver.src.tx.Tx(cfg, channel_coding_off=False)[source]

Bases: object

MIMO-OFDM Transmitter with optional LDPC encoding.

Implements the transmit processing chain that generates OFDM resource grids from random information bits. The chain consists of:

  1. Binary Source: Generates random bits (information or coded).

  2. LDPC Encoder (optional): Applies 5G NR LDPC encoding.

  3. QAM Mapper: Maps bit sequences to constellation symbols.

  4. Resource Grid Mapper: Places symbols and pilots on OFDM grid.
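As a sketch of step 3, a minimal Gray-mapped QPSK mapper; the unit-energy normalization matches the usual convention, but the exact bit-to-symbol ordering here is an assumption, not necessarily Sionna's Mapper ordering:

```python
import numpy as np

# Minimal Gray-mapped QPSK mapper sketch (bit ordering is an assumption).
def qpsk_map(bits):
    b = np.asarray(bits, dtype=float).reshape(-1, 2)
    # one bit selects the real sign, one the imaginary sign; energy = 1
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

syms = qpsk_map([0, 0, 1, 1])
print(syms)  # two unit-energy symbols
```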

Parameters:
  • cfg (Config) – Configuration object containing modulation, coding, and resource grid parameters.

  • channel_coding_off (bool, (default False)) – If True, bypasses LDPC encoding and generates random coded bits directly. Used during training to avoid backpropagating through the non-differentiable encoder.

_cfg

Reference to configuration object.

Type:

Config

_channel_coding_off

Whether encoding is bypassed.

Type:

bool

_num_streams_per_tx

Number of spatial streams (equals number of UT antennas).

Type:

int

Note

In training mode, the neural receiver learns to predict LLRs for random bit patterns. The BCE loss compares predicted LLRs against the known transmitted coded bits c, enabling gradient-based optimization.

Example

>>> cfg = Config(num_bits_per_symbol=BitsPerSym.QPSK)
>>> tx = Tx(cfg, channel_coding_off=False)
>>> out = tx(batch_size=32, h_freq=h_freq)
>>> print(out["b"].shape)  # Information bits
>>> print(out["x_rg"].shape)  # Transmitted resource grid

Baseline Receiver

class demos.mimo_ofdm_neural_receiver.src.rx.Rx(cfg, csi)[source]

Bases: object

Conventional MIMO-OFDM receiver with LS estimation and LMMSE equalization.

Implements the standard receive processing chain used as a baseline for neural receiver comparison. The chain consists of:

  1. Channel Estimation: LS estimation at pilot positions with nearest-neighbor interpolation to data positions.

  2. LMMSE Equalization: Linear minimum mean square error spatial filtering to separate MIMO streams.

  3. Soft Demapping: APP (a posteriori probability) demapper producing soft LLR values for each coded bit.

  4. LDPC Decoding: 5G NR LDPC decoder producing hard bit decisions.
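Step 1 at a single resource element reduces to dividing the received pilot by the known transmitted pilot. A toy, noiseless sketch (pilot values and channel gain made up for illustration):

```python
import numpy as np

# LS channel estimate at pilot positions: h_hat = y_p / x_p.
x_p = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # known pilots
h_true = 0.8 - 0.3j                  # made-up flat channel gain
y_p = h_true * x_p                   # noiseless receive for clarity
h_hat = y_p / x_p                    # per-pilot LS estimate
print(np.allclose(h_hat, h_true))  # True
```

With noise present, each h_hat is perturbed, which is why the estimates are then interpolated (nearest-neighbor here) to data positions rather than trusted point-wise.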

Parameters:
  • cfg (Config) – Configuration object containing modulation, coding, and CSI settings.

  • csi (CSI) – Channel state information object providing ground-truth channel coefficients for perfect-CSI mode.

_cfg

Reference to configuration object.

Type:

Config

_csi

Reference to CSI object for perfect-CSI path.

Type:

CSI

Note

The receiver shares the CSI instance with the transmit chain to ensure the same channel realization is used for both transmission and perfect-CSI reception. This is critical for fair performance evaluation.

Example

>>> cfg = Config(perfect_csi=False)
>>> csi = CSI(cfg)
>>> rx = Rx(cfg, csi)
>>> out = rx(y, h_freq, no)
>>> decoded_bits = out["b_hat"]

Neural Receiver

class demos.mimo_ofdm_neural_receiver.src.neural_rx.ResidualBlock(*args, **kwargs)[source]

Bases: Layer

Residual block with convolutions and layer normalization.

Implements a pre-activation residual block where normalization and activation precede each convolution. The skip connection enables gradient flow through deep networks and allows the block to learn residual refinements rather than full transformations.

Architecture per layer:

LayerNorm -> ReLU -> Conv2D(3x3)

The block applies num_resnet_layers such layers sequentially, then adds the input via skip connection.

Parameters:
  • num_conv2d_filters (int, (default 128)) – Number of output channels for each convolution. All convolutions in the block use the same filter count.

  • num_resnet_layers (int, (default 2)) – Number of normalization-activation-convolution sequences in the block. Must be at least 1.

Raises:

ValueError – If num_resnet_layers < 1.

Note

Layer normalization is applied over spatial and channel dimensions (axes -1, -2, -3) rather than batch normalization. This provides more stable training with small batch sizes and varying SNR conditions.

The 3x3 kernel with ‘same’ padding preserves spatial dimensions, allowing the skip connection to work without dimension adjustment.
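The per-layer sequence above can be sketched in NumPy with an identity stand-in for the convolution (a real layer has learned weights); the sizes are arbitrary:

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # normalize over channel and spatial axes (-1, -2, -3), as in the block
    mu = x.mean(axis=(-1, -2, -3), keepdims=True)
    var = x.var(axis=(-1, -2, -3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def conv2d_stub(x):
    # stand-in for Conv2D(3x3, padding='same'); preserves spatial dims
    return x

def residual_block(x, num_resnet_layers=2):
    z = x
    for _ in range(num_resnet_layers):
        z = conv2d_stub(np.maximum(layer_norm(z), 0.0))  # LN -> ReLU -> Conv
    return z + x  # skip connection: shapes must match

x = np.random.randn(2, 4, 4, 8).astype(np.float32)
y = residual_block(x)
print(y.shape)  # (2, 4, 4, 8)
```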

call(inputs)[source]

Apply residual transformation to input tensor.

Parameters:

inputs (tf.Tensor, float32, [batch, height, width, channels]) – Input feature maps. Channel dimension must match num_conv2d_filters for the skip connection to work.

Returns:

tf.Tensor, float32, [batch, height, width, channels] – Output feature maps with the same shape as the input.

Pre-conditions

  • Input must be float32 (an assertion checks this for debugging).

  • Input channels should equal num_conv2d_filters.

Post-conditions

  • Output shape equals input shape.

  • Output = transform(input) + input (residual connection).

Invariants

  • Spatial dimensions are preserved (3x3 conv with ‘same’ padding).

class demos.mimo_ofdm_neural_receiver.src.neural_rx.NeuralRx(*args, **kwargs)[source]

Bases: Layer

Convolutional neural receiver mapping received signals to LLRs.

This network replaces the traditional channel estimation, equalization, and demapping stages with a learned CNN that directly produces log-likelihood ratios for each coded bit. The architecture processes the received signal across the time-frequency resource grid.

Architecture:
  1. Input preparation: Concatenate [Re(y), Im(y), log10(no)]

  2. Input convolution: Expand to num_conv2d_filters channels

  3. Residual stack: num_res_blocks residual blocks

  4. Output convolution: Reduce to num_streams x bits_per_symbol

  5. Reshape: Reorganize to per-stream, per-bit LLR format

  6. Resource grid demapper: Extract data symbol positions

  7. LDPC decoder (optional): Decode to information bits

Parameters:
  • cfg (Config) – Configuration containing resource grid, modulation, and code params.

  • channel_coding_off (bool, (default False)) – If True, skip LDPC decoding and return raw LLRs. Used during training to compute BCE loss against transmitted coded bits.

  • num_conv2d_filters (int, (default 128)) – Channel dimension throughout the residual stack.

  • num_resnet_layers (int, (default 2)) – Number of conv layers per residual block.

  • num_res_blocks (int, (default 4)) – Number of residual blocks in the network.

_cfg

Reference to configuration object.

Type:

Config

_channel_coding_off

Whether to skip LDPC decoding.

Type:

bool

Note

The noise power is fed in log10 scale because:

  1. SNR varies over orders of magnitude during training.

  2. Log scale provides more uniform gradient behavior.

  3. It empirically improves convergence and final performance.
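The effect of the log10 scaling can be seen on a small SNR sweep (the dB range is an arbitrary example): noise powers spanning orders of magnitude become evenly spaced features.

```python
import math

# Noise powers for SNRs from -5 to 20 dB span ~3 orders of magnitude;
# in log10 scale they become evenly spaced feature values.
no_values = [10 ** (-snr_db / 10) for snr_db in range(-5, 21, 5)]
features = [math.log10(no) for no in no_values]
print([round(f, 3) for f in features])  # [0.5, 0.0, -0.5, -1.0, -1.5, -2.0]
```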

Example

>>> cfg = Config(num_bits_per_symbol=BitsPerSym.QPSK)
>>> neural_rx = NeuralRx(cfg, channel_coding_off=True)
>>> out = neural_rx(y, no, batch_size)
>>> llrs = out["llr"]  # Shape: [batch, 1, num_streams, n]
call(y, no, batch_size)[source]

Process received signal to produce LLRs and optionally decoded bits.

Parameters:
  • y (tf.Tensor, complex64, [batch, num_rx, num_rx_ant, num_ofdm_symbols, fft_size]) – Received OFDM signal after channel and noise.

  • no (tf.Tensor, float32, [batch] or scalar) – Noise power spectral density.

  • batch_size (tf.Tensor, int32, scalar) – Batch dimension size (needed for reshape operations in graph mode).

Returns:

Dictionary containing:

  • "llr": Predicted log-likelihood ratios, shape [batch, 1, num_ut_ant, n].

  • "b_hat": Decoded information bits, shape [batch, 1, num_ut_ant, k]. None if channel_coding_off=True.

Return type:

Dict[str, tf.Tensor]

Note

The tensor transformations in this method follow a specific sequence:

  1. Remove num_rx dimension (assuming single receiver)

  2. Transpose to [batch, ofdm_symbols, subcarriers, antennas]

  3. Split complex values into real and imaginary channels and append the noise feature: 2*num_rx_ant + 1 channels total

  4. Process through CNN

  5. Reshape output to match ResourceGridDemapper expectations

  6. Extract data positions and reshape for decoder input
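Steps 1-3 of this sequence can be reproduced in NumPy on a dummy received signal; all sizes below are assumptions for a single-receiver layout:

```python
import numpy as np

# Assumed sizes for a single-receiver demo layout.
batch, num_rx, num_rx_ant, num_sym, fft = 2, 1, 4, 14, 76
rng = np.random.default_rng(0)
y = (rng.standard_normal((batch, num_rx, num_rx_ant, num_sym, fft))
     + 1j * rng.standard_normal((batch, num_rx, num_rx_ant, num_sym, fft)))
no = np.full((batch,), 0.1)

y = y[:, 0]                                # 1. drop num_rx (single receiver)
y = np.transpose(y, (0, 2, 3, 1))          # 2. [batch, sym, subcarriers, ant]
no_map = np.broadcast_to(np.log10(no)[:, None, None, None],
                         (batch, num_sym, fft, 1))
z = np.concatenate([y.real, y.imag, no_map], axis=-1)  # 3. 2*ant + 1 channels
print(z.shape)  # (2, 14, 76, 9)
```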

System

class demos.mimo_ofdm_neural_receiver.src.system.System(*args, **kwargs)[source]

Bases: Model

End-to-end MIMO-OFDM system with baseline and neural receiver options.

This Keras Model composes all simulation components and provides a unified interface for both training and inference. The system generates transmitted signals, applies channel effects, and processes received signals through either a conventional or neural receiver.

The processing pipeline is:

  1. CSI Generation: Create frequency-domain channel response

  2. Transmission: Generate bits, encode, modulate, map to OFDM grid

  3. Channel: Apply frequency-domain channel and add AWGN

  4. Reception: Process received signal (baseline LMMSE or neural CNN)

  5. Output: Return loss (training) or bit tensors (inference)
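Step 3 of the pipeline, reduced to a single transmit/receive antenna pair, is one line per subcarrier: y = h * x + w. A toy sketch with made-up sizes:

```python
import numpy as np

# Per-subcarrier frequency-domain channel plus AWGN (toy sizes).
rng = np.random.default_rng(0)
fft_size, no = 8, 0.1
x = (rng.standard_normal(fft_size) + 1j * rng.standard_normal(fft_size)) / np.sqrt(2)
h = (rng.standard_normal(fft_size) + 1j * rng.standard_normal(fft_size)) / np.sqrt(2)
w = np.sqrt(no / 2) * (rng.standard_normal(fft_size)
                       + 1j * rng.standard_normal(fft_size))  # noise power = no
y = h * x + w
print(y.shape)  # (8,)
```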

Parameters:
  • training (bool, (default False)) – If True, configure for training mode: disable channel coding in Tx/Rx and return the BCE loss instead of bit tensors.

  • perfect_csi (bool, (default False)) – If True, baseline receiver uses ground-truth CSI. Only affects baseline Rx; neural Rx never uses explicit CSI.

  • cdl_model ({"A", "B", "C", "D", "E"}, (default "D")) – 3GPP CDL channel model variant.

  • delay_spread (float, (default 300e-9)) – RMS delay spread in seconds.

  • carrier_frequency (float, (default 2.6e9)) – Carrier frequency in Hz.

  • speed (float, (default 0.0)) – UE speed in m/s for Doppler modeling.

  • num_bits_per_symbol (BitsPerSym, (default BitsPerSym.QPSK)) – Modulation order.

  • use_neural_rx (bool, (default False)) – If True, use neural receiver; otherwise use baseline LMMSE receiver.

  • num_conv2d_filters (int, (default 128)) – Neural receiver CNN width.

  • num_resnet_layers (int, (default 2)) – Layers per residual block in neural receiver.

  • num_res_blocks (int, (default 4)) – Number of residual blocks in neural receiver.

  • name (str, (default "system")) – Keras model name for variable scoping.

bce

Loss function for training (expects logits, not probabilities).

Type:

tf.keras.losses.BinaryCrossentropy

Note

The system accepts Eb/N0 in dB and internally converts to noise power using ebnodb2no. This allows consistent SNR specification across different modulation orders and code rates.

Both __call__ and call_scalar are provided:

  • __call__: Takes vector Eb/N0 (one per batch sample).

  • call_scalar: Takes scalar Eb/N0 (broadcast to all samples).

The scalar variant is required for compatibility with Sionna’s PlotBER.simulate() which passes scalar SNR values.
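The Eb/N0-to-noise-power conversion can be re-derived in simplified form; note this sketch ignores the pilot and guard overhead that Sionna's ebnodb2no accounts for via the resource grid:

```python
# Hedged re-derivation of Eb/N0 -> N0 (resource-grid overhead omitted).
def ebnodb2no_sketch(ebno_db, num_bits_per_symbol, coderate):
    ebno = 10 ** (ebno_db / 10)
    # Es/N0 = Eb/N0 * bits_per_symbol * coderate; unit symbol energy assumed
    return 1.0 / (ebno * num_bits_per_symbol * coderate)

no = ebnodb2no_sketch(10.0, num_bits_per_symbol=2, coderate=0.5)
print(no)  # 0.1
```

This is why the same Eb/N0 value yields different noise powers for different modulation orders and code rates, keeping SNR specifications comparable.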

Example

>>> # Training
>>> system = System(training=True, use_neural_rx=True)
>>> loss = system(batch_size, ebno_db_vector)
>>> # Inference
>>> system = System(training=False, use_neural_rx=True)
>>> b, b_hat = system(batch_size, ebno_db_vector)
call_scalar(batch_size, ebno_db_scalar)

Forward pass with scalar Eb/N0 (for PlotBER compatibility).

This method broadcasts a single Eb/N0 value to all batch samples, providing compatibility with Sionna’s PlotBER.simulate() which calls the model with scalar SNR values.

Parameters:
  • batch_size (tf.Tensor, int32, scalar) – Number of samples in the batch.

  • ebno_db_scalar (tf.Tensor, float32, scalar) – Eb/N0 in dB, applied uniformly to all batch samples.

Return type:

See __call__ for return value documentation.

Note

This is a thin wrapper that expands the scalar to a vector and delegates to __call__.