## Air Force Research Laboratory/Air Force Office of Scientific Research University Center of Excellence: Agile Waveform Design for Communication Networks in Contested Environments

**Developing AI-informed communication and networking protocols**

Rhodes iiD researchers are working with colleagues across the nation to ensure that future communication protocols used by the United States Air Force are suitable for handling the most data-heavy tasks imaginable, such as flying UAVs, and secure from adversarial attack.

The project is led by Robert Calderbank, the Charles S. Sydnor Distinguished Professor of Computer Science, Electrical and Computer Engineering, and Mathematics, and director of the Rhodes Information Initiative at Duke, and Vahid Tarokh, the Rhodes Family Professor of Electrical and Computer Engineering.

The new center also draws in research expertise from Virginia Tech, Princeton University, Carnegie Mellon University, Colorado State University, and Arizona State University.

The project will deepen existing collaborations between the universities involved and the Air Force Research Laboratory (AFRL). By tackling this new challenge, the researchers will increase the capabilities, knowledge, skills, and expertise of the AFRL workforce, while giving its staff opportunities to work with a large pipeline of talented students through programs like Data+ and Code+, both ten-week summer research experiences that pair mixed teams of Duke undergraduate and graduate students with real-life data sets and problems from partnering companies.

### October 31, 2023 Meeting

**Speaker:** Suya Wu, Duke University

Suya left Duke in August 2023 to take a position at Microsoft.

**Title:** Score-based Hypothesis Testing and Change-point Detection for Unnormalized Models

**Slides:** Score-based Hypothesis Testing and Change-point Detection for Unnormalized Models (PDF)

(Email: suya.wu@duke.edu)

### August 28, 2023 Meeting

**Speaker:** Michal Yemini, Princeton University

**Title:** Multi-Armed Bandits with Self-Information Rewards

**Slides:** Multi-Armed Bandits with Self-Information Rewards (PDF)

**Abstract:** In this talk, I will introduce the informational multi-armed bandit (IMAB) model, in which at each round, a player chooses an arm, observes a symbol, and receives an unobserved reward in the form of the symbol’s self-information. Thus, the expected reward of an arm is the Shannon entropy of the probability mass function of the source that generates its symbols. The player aims to maximize the expected total reward associated with the entropy values of the arms played. I will present two UCB-based algorithms for the IMAB model with a known alphabet size, which consider the biases of the plug-in entropy estimator. The first algorithm optimistically corrects the bias term in the entropy estimation. The second algorithm relies on data-dependent confidence intervals that adapt to sources with small entropy values. I will provide performance guarantees by upper bounding the expected regret of each of the algorithms and will compare their asymptotic behavior in the Bernoulli case to the Lai-Robbins lower bound for the pseudo-regret. Finally, we will discuss the interesting case where the exact alphabet size is unknown and the player only knows a loose upper bound on it, and I will propose a UCB-based algorithm in which the player aims to reduce the regret caused by the unknown alphabet size in a finite-time regime.
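
The first algorithm's idea can be sketched as follows. This is an illustrative sketch only, assuming a Miller-Madow-style correction of size (K-1)/(2n) for the plug-in entropy estimator's bias; the paper's exact correction and confidence radius may differ.

```python
import math
import random
from collections import Counter

def plugin_entropy(counts, n):
    """Plug-in (maximum-likelihood) entropy estimate in nats."""
    return -sum((c / n) * math.log(c / n) for c in counts.values() if c > 0)

def entropy_ucb(sources, horizon, alphabet_size, seed=0):
    """UCB-style play for entropy rewards: index = plug-in entropy
    + Miller-Madow bias correction (K-1)/(2n) + exploration bonus.
    sources: list of symbol distributions, one per arm (illustrative)."""
    rng = random.Random(seed)
    K = len(sources)
    counts = [Counter() for _ in range(K)]
    pulls = [0] * K
    for t in range(1, horizon + 1):
        if t <= K:
            arm = t - 1                      # pull each arm once first
        else:
            def index(i):
                n = pulls[i]
                est = plugin_entropy(counts[i], n)
                bias = (alphabet_size - 1) / (2 * n)   # Miller-Madow correction
                bonus = math.sqrt(2 * math.log(t) / n)  # optimism
                return est + bias + bonus
            arm = max(range(K), key=index)
        symbol = rng.choices(range(alphabet_size), weights=sources[arm])[0]
        counts[arm][symbol] += 1
        pulls[arm] += 1
    return pulls
```

On two Bernoulli sources, the fair coin (highest entropy) should attract almost all pulls.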

This talk is based on joint work with Nir Weinberger, recently accepted for publication in the IEEE Transactions on Information Theory: https://ieeexplore.ieee.org/abstract/document/10196497

(Email: yemini.michal@gmail.com)

### August 7, 2023 Meeting

**Speaker:** Jiin Woo, CMU

**Title:** The Blessing of Heterogeneity in Federated Q-Learning: Linear Speedup and Beyond

**Slides:** The Blessing of Heterogeneity in Federated Q-Learning: Linear Speedup and Beyond (PDF)

**Abstract:** When the data used for reinforcement learning (RL) are collected by multiple agents in a distributed manner, federated versions of RL algorithms allow collaborative learning without the need to share local data. In this paper, we consider federated Q-learning, which aims to learn an optimal Q-function by periodically aggregating local Q-estimates trained on local data alone. Focusing on infinite-horizon tabular Markov decision processes, we provide sample complexity guarantees for both the synchronous and asynchronous variants of federated Q-learning. In both cases, our bounds exhibit a linear speedup with respect to the number of agents and sharper dependencies on other salient problem parameters. Moreover, existing approaches to federated Q-learning adopt an equally-weighted averaging of local Q-estimates, which can be highly sub-optimal in the asynchronous setting since the local trajectories can be highly heterogeneous due to different local behavior policies. Existing sample complexity scales inversely with the minimum entry of the stationary state-action occupancy distributions over all agents, requiring that every agent covers the entire state-action space. Instead, we propose a novel importance averaging algorithm, giving larger weights to more frequently visited state-action pairs. The improved sample complexity scales inversely with the minimum entry of the average stationary state-action occupancy distribution of all agents, thus only requiring that the agents collectively cover the entire state-action space, unveiling the blessing of heterogeneity.
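
The importance-averaging step can be sketched as below: each (state, action) entry of the aggregated Q-table weights agents by their share of the total visit count, so frequently visited entries dominate. This is an illustrative aggregation rule only; the paper's exact weighting may differ.

```python
import numpy as np

def importance_average(q_tables, visit_counts):
    """Aggregate local Q-estimates with per-(state, action) weights
    proportional to each agent's visit counts.
    q_tables, visit_counts: shape (num_agents, S, A)."""
    q = np.asarray(q_tables, dtype=float)
    n = np.asarray(visit_counts, dtype=float)
    total = n.sum(axis=0)                                 # (S, A) total visits
    # Weight by visit share; fall back to equal weights for unvisited entries.
    w = np.where(total > 0, n / np.maximum(total, 1e-12), 1.0 / len(q))
    return (w * q).sum(axis=0)
```

With two agents, one state, and two actions, an entry visited only by agent 1 takes agent 1's value, while shared entries blend proportionally.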

(Email: jiinw@andrew.cmu.edu)

### July 24, 2023 Meeting

**Speaker:** Suya Wu, Duke University

Suya left Duke in August 2023 to take a position at Microsoft.

**Title:** Robust Quickest Change Detection for Unnormalized Models

**Slides:** Score-based Quickest Change Detection for Unnormalized Models (PDF)

**Abstract:** Detecting an abrupt and persistent change in the underlying distribution of online data streams is an important problem in many applications. This paper proposes a new robust score-based algorithm called RSCUSUM, which can be applied to unnormalized models and addresses the issue of unknown post-change distributions. RSCUSUM replaces the Kullback-Leibler divergence with the Fisher divergence between pre- and post-change distributions for computational efficiency in unnormalized statistical models and introduces a notion of the “least favorable” distribution for robust change detection. The algorithm and its theoretical analysis are demonstrated through simulation studies.

(Email: suya.wu@duke.edu)

### May 22, 2023 Meeting

**Speaker:** Lauren Huie

**Title:** Collaboration Opportunities & Professional Development Initiatives

**Abstract:** This talk focuses on opportunities to build waveform design equities at AFRL through: 1) a data collection framework, and 2) professional development. In the first thrust, data collection opportunities for waveform design are discussed. Bridging the gap from theory to testbed is not trivial. Modeling the complexity of what a network may encounter due to unintentional or intentional interference is key to characterizing performance in over-the-air environments. Determining over-the-air performance requires a careful look at how we parameterize the physical environment and capture the state of the experimental set-up. An experimental framework is described which bridges the gap between theory and practice, allowing for easy entry from MATLAB to measurements. In the second thrust, professional development goals for waveform design are discussed. Desired professional technical trajectories are described, gaps are identified, and collaboration discussion is invited.

(Email: lauren.huie-seversky@us.af.mil)

### May 1, 2023 Meeting

**Title:** Enhancing the Security of OFDM-based Radio Interfaces using a Spread Spectrum Underlay Signal

**Speaker:** Nishith Tripathi, Virginia Tech

**Slides:** Enhancing the Security of OFDM-based Radio Interfaces using a Spread Spectrum Underlay Signal (PDF)

**Abstract:** 4G and 5G use OFDM to achieve high data rates. However, the 5G NR waveform is easy to detect and hence vulnerable to an attack. This seminar discusses the use of a spread spectrum underlay signal that co-exists with an OFDM signal to enhance the security of the radio interface communications. Such an underlay signal can carry sensitive traffic or critical signaling messages. The underlay signal also has a commercial use case, where the URLLC traffic is carried by the underlay signal without interrupting or degrading non-URLLC transmissions. This seminar highlights vulnerabilities of the 5G NR radio interface and describes the design of the proposed underlay signal. The performance of the proposed underlay technique is evaluated using comprehensive simulations. The simulation results demonstrate the ability of the proposed underlay signal to transport information securely and without causing any perceptible degradation to the non-URLLC traffic.
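
The underlay principle can be illustrated with a toy direct-sequence spreading example (this is a generic sketch, not the proposed design): a low-power PN-spread signal rides beneath stronger wideband interference, and correlating against the PN sequence recovers the bits through the processing gain.

```python
import numpy as np

def spread(bits, pn):
    """Direct-sequence spreading: each +/-1 bit is multiplied by the
    pseudo-noise (PN) chip sequence."""
    return np.concatenate([b * pn for b in bits])

def despread(rx, pn):
    """Correlate each chip block with the PN sequence and slice;
    the processing gain is len(pn)."""
    blocks = rx.reshape(-1, len(pn))
    return np.sign(blocks @ pn)

# Toy demo: underlay chips roughly 8 dB below the interference power.
rng = np.random.default_rng(0)
pn = rng.choice([-1.0, 1.0], size=256)
bits = np.array([1.0, -1.0, 1.0, -1.0])
rx = 0.1 * spread(bits, pn) + 0.25 * rng.standard_normal(4 * 256)
recovered = despread(rx, pn)
```

With 256 chips per bit, the correlator output is about 6 standard deviations above the interference floor, so the hidden bits come back clean.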

(Email: nishith@vt.edu)

### April 10, 2023 Meeting

**Speaker:** Nishith Tripathi, Virginia Tech

**Title:** Design Considerations for Replacing the 5G NR Physical Layer

**Slides:** Design Considerations for Replacing the 5G NR Physical Layer (PDF)

**Abstract:** 5G New Radio (NR) is a flexible and high-performance radio interface. 5G NR possesses several capabilities with some specific to the OFDM-based radio interface and some independent of OFDM. This seminar discusses key capabilities of the 5G NR radio protocol stack and how Layer 2 and above support the physical layer. The seminar highlights the implications of replacing the 5G NR physical layer on L2 and L3 of the NR radio protocol stack as well as the 5G core (5GC) network. The influence of the physical layer changes on the 5G Radio Access Network architecture is also described. The implications of the PHY layer changes in the context of O-RAN are also discussed. In summary, this seminar provides a concise overview of the design implications of changing the 5G NR physical layer on the radio protocol stack, the RAN architecture, and the 5GC.

(Email: nishith@vt.edu)

### March 23, 2023 Meeting

**Speakers:** Akshay S. Bondre, ASU, and Christ D. Richmond, Duke University

**Title:** Channel Estimation and Sensing for OTFS Modulation using 2D MUSIC

**Slides:** Channel Estimation and Sensing for OTFS Modulation using 2D MUSIC (PDF)

**Abstract:** The OTFS waveform enables us to perform channel estimation and sensing using radar-like transmitter and receiver processing. In this talk, we consider the problem of estimating the delays and Doppler-shifts introduced by a multipath scattering environment using OTFS waveforms. We show that the received time-frequency domain signal resulting from a single OTFS pilot symbol is a superposition of 2D complex exponentials, where the “frequencies” of the complex exponentials are given by the delays and Doppler shifts corresponding to the scatterers. As a result, we can apply a 2D version of the well-known MUSIC (Multiple Signal Classification) algorithm in order to estimate the delays and Doppler shifts. Since the OTFS waveform allows separation of pilot and data symbols, the data symbols can be filtered out in the delay-Doppler domain, and the resulting pilot signal is converted to the time-frequency domain and used as the input to the 2D MUSIC algorithm. Lastly, we discuss some insights from the Cramér-Rao bound for delay-Doppler estimation.
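
The core subspace step can be sketched in one dimension (a single noiseless complex exponential, with sliding-window smoothing to build the covariance); the talk's 2D version scans a delay-Doppler grid in the same way. This is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def music_spectrum(x, m, grid):
    """MUSIC pseudo-spectrum for frequencies in a sum of complex
    exponentials.  x: 1-D snapshot vector; m: covariance (subarray)
    size; grid: candidate normalized frequencies in [0, 0.5)."""
    n = len(x)
    # Build covariance from overlapping length-m windows (smoothing).
    snaps = np.stack([x[i:i + m] for i in range(n - m + 1)], axis=1)
    R = snaps @ snaps.conj().T / snaps.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)      # ascending eigenvalues
    En = eigvecs[:, :-1]                      # noise subspace (1 source assumed)
    k = np.arange(m)
    spec = []
    for f in grid:
        a = np.exp(2j * np.pi * f * k)        # steering vector
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spec.append(1.0 / max(denom, 1e-12))  # peak where a is orthogonal to En
    return np.array(spec)
```

The pseudo-spectrum peaks where the steering vector falls in the signal subspace, i.e., at the true frequency.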

(Emails: christ.richmond@duke.edu and asbondre@asu.edu)

### March 13, 2023 Meeting

**Speaker:** Saif Mohammed, IIT Delhi

**Title:** OTFS Based Orthogonal Multiple Access (OMA)

**Slides:** OTFS Based Orthogonal Multiple Access (OMA) (PDF)

**Abstract:** We consider OMA where the user terminals (UTs) are allocated non-overlapping resources in the delay-Doppler (DD) and/or time-frequency (TF) domain. Well-known OMA methods include: i) Guard Band (GB) based MA (GBMA), ii) Interleaved Delay-Doppler MA (IDDMA), and iii) Interleaved Time-Frequency MA (ITFMA). With ideal pulses, IDDMA and ITFMA are free from multi-user interference (MUI), and unlike GBMA they do not use guard bands in the DD/TF domain. Since ideal pulses are not realizable, we study the performance of these OMA methods with practical rectangular pulses. Our study reveals the presence of MUI when rectangular pulses are used, although the amount of MUI in IDDMA is observed to be significantly smaller than that in ITFMA. We also derive an expression for the achievable sum spectral efficiency (SE). Through simulations, for practical values of the received signal-to-noise ratio, it is observed that with rectangular pulses the sum SE achieved by the IDDMA method is significantly higher than that achieved by the ITFMA and GBMA methods.

(Email: saif.k.mohammed@ee.iitd.ac.in)

### February 27, 2023 Meeting

**Speaker:** Alireza Vahid, Rochester Institute of Technology

**Title:** Defending WiFi Networks against Control Channel Attacks

**Slides:** Defending WiFi Networks against Control Channel Attacks (PDF)

**Abstract:** Future wireless networks will provide the platform for many critical applications, such as resilient/self-healing autonomous systems, wearable health, space/ground communications, and the Internet of Things, and will connect billions of devices with vastly different characteristics. The stability, coverage, and reliability of these systems will heavily rely on small control packets. However, reliably collecting even these small packets will be a daunting challenge. Notably, for various reasons such as legacy and latency, control packets are not protected (through encryption, for instance) and are susceptible to malicious activities and attacks on wireless nodes and/or links. Further, in machine-type communications, the forward payload is much smaller compared to traditional packets and becomes comparable to the control packets in size, resulting in much higher learning overhead; and in higher frequency bands or space communications, wireless links in both forward and control channels are characterized by frequent outages and unreliability. These challenges result in intermittent, asymmetric, and even contradictory knowledge of the network status information at different users, hindering communications.

In this talk, we first show how control channel vulnerabilities can be exploited by malicious users in the context of WiFi networks through what we call an SNR-steal attack, which steers the beam from the intended user to the attacker. We then look at different attack scenarios (e.g., denial-of-service and spoofing attacks) and establish the limits on how much the impact of the attacks may be alleviated. We investigate potential defense strategies through devising resilient protocols that optimally harness the available control packets and quantify the resulting gains. We will also explore methods and ideas to identify potential attackers. Although we motivate the work by WiFi networks, the results apply to a broad set of wireless systems.

(Email: vahid.alireza@gmail.com)

### February 6, 2023 Meeting

**Speaker:** Michal Yemini, Princeton University

**Title:** Resilience to Malicious Activity in Distributed Optimization for Cyberphysical Systems

**Slides:** Resilience to Malicious Activity in Distributed Optimization for Cyberphysical Systems (PDF)

**Abstract:** Enhancing resilience in distributed networks in the face of malicious agents is an important problem for which many key theoretical results and applications require further development and characterization. This talk focuses on the problem of distributed optimization in multi-agent cyberphysical systems, where a legitimate agent’s dynamics are influenced both by the values it receives from potentially malicious neighboring agents and by its own self-serving target function. We develop a new algorithmic and analytical framework to achieve resilience for the class of problems where stochastic values of trust between agents exist and can be exploited. In this case, we will show that convergence to the true global optimal point can be recovered, both in mean and almost surely, even in the presence of malicious agents. Furthermore, we will establish expected convergence rate guarantees in the form of upper bounds on the expected squared distance to the optimal value. We will conclude the talk by presenting numerical results that validate the analytical convergence guarantees we present in this talk even when the malicious agents constitute the majority of agents in the network.

This talk is based on joint work with Angelia Nedich, Stephanie Gil, and Andrea Goldsmith (https://arxiv.org/pdf/2212.02459.pdf), presented in part at the IEEE Conference on Decision and Control, 2022.

(Email: yemini.michal@gmail.com)

### January 23, 2023 Meeting

**Speaker:** Laixi Shi, CMU

**Title:** Offline Reinforcement Learning: Towards Optimal Sample Complexity and Distributional Robustness

**Slides:** Offline Reinforcement Learning: Towards Optimal Sample Complexity and Distributional Robustness (PDF)

**Abstract:** Offline or batch reinforcement learning seeks to learn a near-optimal policy using historical data without active exploration of the environment. To counter the insufficient coverage and sample scarcity of many offline datasets, the principle of pessimism has recently been introduced to mitigate high bias in the estimated values. However, prior algorithms and analyses either suffer from suboptimal sample complexities or incur a high burn-in cost to reach sample optimality, posing an impediment to efficient offline RL in sample-starved applications. In this talk, we demonstrate that the model-based (or “plug-in”) approach achieves minimax-optimal sample complexity without burn-in cost for tabular Markov decision processes (MDPs). Our algorithms are “pessimistic” variants of value iteration with Bernstein-style penalties and do not require sophisticated variance reduction. We further consider a distributionally robust formulation of offline RL, focusing on tabular robust MDPs with an uncertainty set specified by the Kullback-Leibler divergence, where again a model-based algorithm that combines distributionally robust value iteration with the principle of pessimism achieves a near-optimal sample complexity up to a polynomial factor of the effective horizon length.
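
A sketch of the model-based pessimistic update on a toy tabular MDP, using a simple Hoeffding-style count penalty in place of the sharper Bernstein-style penalties from the talk:

```python
import numpy as np

def pessimistic_value_iteration(P_hat, R, counts, gamma=0.9, beta=1.0, iters=200):
    """Value iteration on an empirical MDP, subtracting a count-based
    penalty from each Q-value (pessimism under limited offline coverage).
    P_hat: (S, A, S) empirical transitions; R: (S, A) rewards;
    counts: (S, A) visit counts in the offline dataset."""
    penalty = beta / np.sqrt(np.maximum(counts, 1.0))  # Hoeffding-style;
    # the talk's algorithms use sharper Bernstein-style penalties.
    V = np.zeros(P_hat.shape[0])
    Q = np.zeros(R.shape)
    for _ in range(iters):
        Q = np.clip(R - penalty + gamma * (P_hat @ V), 0.0, None)
        V = Q.max(axis=1)
    return Q, V
```

On a one-state MDP where the higher-reward action was visited only once, the penalty makes the well-covered action preferable, which is exactly the pessimism principle at work.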

(Email: laixishi@cmu.edu)

### December 5, 2022 Meeting

**Title:** GAL: Gradient Assisted Learning for Decentralized Multi-Organization Collaborations

**Speaker:** Enmao Diao, Duke University

**Slides:** GAL: Gradient Assisted Learning for Decentralized Multi-Organization Collaborations (PDF)

### October 17, 2022 Meeting

**Title:** Reliable Shared Secret Extraction through OTFS

**Speaker:** Usama Saeed, Virginia Tech

**Slides:** Reliable Shared Secret Extraction through OTFS (PDF)

### October 10, 2022 Meeting

**Slides:** Learning in the Delay-Doppler Domain (PDF)

**Speaker:** Robert Calderbank, Duke University

**Title:** Learning in the Delay-Doppler Domain

**Abstract:** We describe how pulsones interpolate between TDM and FDM, and when it is possible to learn input-output relations without learning the channel, opening the door to machine learning.

### August 29, 2022 Meeting

**Speaker:** Bowen Li, Colorado State University

**Title:** Minimax Concave Penalty Regularized Adaptive System Identification

**Abstract:** We present a recursive least squares (RLS) type algorithm with a minimax concave penalty (MCP) for adaptive identification of a sparse tap-weight vector that represents a communication channel. The proposed algorithm recursively yields its estimate of the tap-weight vector from noisy streaming observations of a received signal, using an expectation-maximization (EM) update. We prove the convergence of our algorithm to a local optimum and provide bounds for the steady-state error. Using simulation studies of a Rayleigh fading channel, a Volterra system, and a multivariate time series model, we demonstrate that our algorithm outperforms, in the mean-squared error (MSE) sense, the standard RLS and the $\ell_1$-regularized RLS.
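
The shrinkage an MCP regularizer induces is the firm-thresholding operator below, the penalty's proximal map for unit step size and gamma > 1. This sketches only the thresholding step, not the full RLS/EM recursion from the talk.

```python
import numpy as np

def mcp_threshold(x, lam, gamma=3.0):
    """Firm thresholding: proximal operator of the minimax concave
    penalty (MCP) for step size 1 and gamma > 1.  Unlike soft
    thresholding, it leaves large coefficients unbiased."""
    x = np.asarray(x, dtype=float)
    return np.where(
        np.abs(x) <= lam, 0.0,                            # kill small entries
        np.where(np.abs(x) <= gamma * lam,                # shrink mid-range
                 np.sign(x) * (np.abs(x) - lam) / (1.0 - 1.0 / gamma),
                 x))                                      # pass large entries
```

For lam=1 and gamma=3, entries below 1 vanish, entries between 1 and 3 are partially shrunk, and entries above 3 pass through unchanged, which is how MCP avoids the constant bias of the $\ell_1$ penalty.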

### August 15, 2022 Meeting

**Speaker:** Usama Saeed

**Title:** Wireless Channel Models

**Abstract:** An overview of the 3GPP Clustered Delay Line (CDL) channel model. The presentation is intended to highlight the key components and use-cases of the CDL channel model against a backdrop of other channel models widely accepted in the research community. Channel models such as the 3GPP Spatial Channel Model (SCM), Tapped Delay Line (TDL) and others will be compared with CDL in order to provide context for selecting an appropriate channel model for a particular simulation setup.

### July 25, 2022 Meeting

**Speakers:** The Duke/Virginia Tech Data+ student team

**Title:** Learning to Communicate

**Abstract:** The team will present a GNU Radio OFDM implementation of Q-learning for interference avoidance.

### July 18, 2022 Meeting

**Speaker:** Jiarui Xu, Virginia Tech

**Title:** Learning to Equalize OTFS

**Abstract:** Orthogonal Time Frequency Space (OTFS) is a novel framework that processes modulation symbols via a time-independent channel characterized in the delay-Doppler domain. The conventional waveform, orthogonal frequency division multiplexing (OFDM), requires tracking frequency-selective fading channels over time, whereas OTFS benefits from full time-frequency diversity by leveraging appropriate equalization techniques. In this talk, we consider a neural network-based supervised learning framework for OTFS equalization. Learning of the introduced neural network is conducted in each OTFS frame, fulfilling an online learning framework: the training and testing datasets are within the same OTFS frame over the air. Utilizing reservoir computing, a special recurrent neural network, the resulting one-shot online learning is sufficiently flexible to cope with channel variations among different OTFS frames (e.g., due to link/rank adaptation and user scheduling in cellular networks). The proposed method does not require explicit channel state information (CSI), and simulation results demonstrate a lower bit error rate (BER) than conventional equalization methods in the low signal-to-noise ratio (SNR) regime under large Doppler spreads. When compared with its neural network-based counterparts for OFDM, the introduced approach for OTFS leads to a better tradeoff between processing complexity and equalization performance.

To learn more: Learning to Equalize OTFS

### June 27, 2022 Meeting

**Slides:** Model-Aided Data Driven Adaptive Target Detection for Channel Matrix-Based Cognitive Radar

**Speaker:** Christ Richmond, Duke University

**Title:** Model-Aided Data Driven Adaptive Target Detection for Channel Matrix-Based Cognitive Radar

**Abstract:** Data-driven approaches to signal processing, including deep neural networks (DNNs), have shown promise in various fields. Such techniques tend to require significant training for good convergence. Model-based approaches, however, provide practical, data-efficient solutions, often with insightful and intuitive interpretations. A hybrid approach that employs data-driven techniques aided by knowledge from model-based approaches may help reduce required training and improve convergence rates. This work investigates the potential of deep learning techniques to detect radar targets while accelerating the learning process via use of expert/domain knowledge from model-based algorithms for channel matrix-based cognitive radar/sonar. The channel matrices characterize responses from target and clutter/reverberation. The architecture of the proposed DNN exploits insights from the model-based generalized likelihood ratio test (GLRT) statistic presented in our previous work, and hence the resulting DNN algorithm benefits from the merits of both the model-based and data-driven approaches. Our proposed DNN architecture utilizes the secondary data for clutter channel estimation via the maximum-likelihood approach, and thus requires little to no retraining as the clutter environment changes. We compare the detection performance of model-aided deep learning-based algorithms with that of traditional model-based techniques and pure data-driven DNN approaches using receiver operating characteristic (ROC) curves from Monte Carlo simulations. We also study and compare the robustness of these techniques by changing the signal-to-interference-plus-noise ratio (SINR), the number of targets and clutter sources, and the amount of available training data.

### June 13, 2022 Meeting

**Slides:** Simple Formula for the Moments of Unitarily Invariant Matrix Distributions (PDF)

**Papers:** A Simple Formula for the Moments of Unitarily Invariant Matrix Distributions (PDF)

**Speaker:** Ali Pezeshki, Colorado State University

**Title:** A Simple Formula for the Moments of Unitarily Invariant Matrix Distributions

**Abstract:** We derive a simple formula for computing arbitrary moments of all matrix distributions that can be transformed to a unitarily invariant distribution through conjugation by a fixed matrix. Such distributions arise in many applications in communications, radar, and sonar. The Schur-Weyl duality is used to decompose the expected value of tensor powers of the random matrices as a linear combination of projection operators onto unitary irreducible representations. The coefficients in this combination, which are labeled by Young diagrams, are expectations of products of determinants of the random matrices. In a number of important cases, including matrix gamma and matrix beta distributions, these coefficients can be simply computed from a knowledge of the normalization factors of the distributions. Our approach has the advantage that it neatly separates combinatorial aspects of the moment calculation, which are essentially the same for all distributions in the class, from the calculation of a small number of specific distribution-dependent moments.

Read more: A Simple Formula for the Moments of Unitarily Invariant Matrix Distributions

### May 16, 2022 Meeting

**Slides:** Mitigating Connectivity Failures in Federated Learning via Collaborative Relaying (PDF)

**Speaker:** Rajarshi Saha, Stanford University

**Title:** Robust Federated Learning with Connectivity Failures: A Semi-Decentralized Framework with Collaborative Relaying

**Abstract:** Intermittent client connectivity is one of the major challenges in centralized federated edge learning frameworks. Intermittently failing uplinks to the central parameter server (PS) can induce a large generalization gap in performance, especially when the data distribution among the clients exhibits heterogeneity. In this work, to mitigate communication blockages between clients and the central PS, we introduce the concept of knowledge relaying, wherein the successfully participating clients collaborate in relaying their neighbors’ local updates to the PS in order to boost the participation of clients with intermittently failing connectivity. We propose a collaborative-relaying-based semi-decentralized federated edge learning framework where, at every communication round, each client first computes a local consensus of the updates from its neighboring clients and eventually transmits a weighted average of its own update and those of its neighbors to the PS. We appropriately optimize these averaging weights to reduce the variance of the global update at the PS while ensuring that the global update is unbiased, consequently improving the convergence rate. Finally, by conducting experiments on the CIFAR-10 dataset we validate our theoretical results and demonstrate that our proposed scheme is superior to the federated averaging benchmark, especially when the data distribution among clients is non-i.i.d.
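
To see why unbiasedness constrains the relay weights, here is a toy construction (an equal-split heuristic, not the variance-optimal weights from the talk): if client j's uplink succeeds with probability p[j], scaling whatever j relays by 1/p[j] keeps the expected PS update equal to the sum of all client updates.

```python
import numpy as np

def unbiased_relay_weights(adjacency, p):
    """Relay weights alpha[j, i] (weight client j gives to client i's
    update) such that E[sum_j tau_j * sum_i alpha[j, i] * x_i] = sum_i x_i,
    where tau_j ~ Bernoulli(p[j]) is client j's uplink success.  Each
    client's update is split equally among its connected relays and
    rescaled by 1/p[j].  Assumes every client has at least one relay."""
    A = np.asarray(adjacency, dtype=float)   # A[j, i] = 1 if j can relay i
    p = np.asarray(p, dtype=float)
    deg = A.sum(axis=0)                      # number of relays carrying client i
    return A / (deg[None, :] * p[:, None])
```

The unbiasedness condition is that for every client i, the success-probability-weighted column sum of alpha equals one.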

To find out more: Robust Federated Learning with Connectivity Failures: A Semi-Decentralized Framework with Collaborative Relaying

### May 2, 2022 Meeting

**Slides:** BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression (PDF)

**Papers:** BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression

**Speaker:** Zhize Li, Carnegie Mellon University

**Title:** BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression

**Abstract:** Communication efficiency has been widely recognized as the bottleneck for large-scale decentralized machine learning applications in multi-agent or federated environments. To tackle the communication bottleneck, there have been many efforts to design communication-compressed algorithms for decentralized nonconvex optimization, where the clients are only allowed to communicate a small amount of quantized information (aka bits) with their neighbors over a predefined graph topology. Despite significant efforts, the state-of-the-art algorithm in the nonconvex setting still suffers from a slower rate of convergence O((G/T)^{2/3}) compared with its uncompressed counterpart, where G measures the data heterogeneity across different clients, and T is the number of communication rounds. This paper proposes BEER, which adopts communication compression with gradient tracking, and shows it converges at a faster rate of O(1/T). This significantly improves over the state-of-the-art rate, by matching the rate without compression even under arbitrary data heterogeneity. Numerical experiments are also provided to corroborate our theory and confirm the practical superiority of BEER in the data heterogeneous regime.
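
Guarantees of this kind are typically stated for contractive compressors; the canonical example is top-k sparsification, sketched below as a standalone helper (an illustration of the compression primitive, not BEER itself).

```python
import numpy as np

def top_k(x, k):
    """Top-k sparsifier: keep the k largest-magnitude entries, zero the
    rest.  A contractive (biased) compressor satisfying
    ||C(x) - x||^2 <= (1 - k/d) * ||x||^2 for x of dimension d."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]         # indices of the k largest |x_i|
    out[idx] = x[idx]
    return out
```

Each round, a client would transmit only the k surviving entries (values plus indices) instead of the full vector, and the contraction property is what the convergence analysis leans on.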

### April 18, 2022 Meeting

**Speaker:** Ang Li, University of Maryland

**Title:** Heterogeneity-Aware and Efficient Federated Learning

**Abstract:** Edge devices are proliferating, and the gigantic amounts of data they generate are distributed everywhere. Such distributed data fuel intelligence at the edge, where the data reside. Federated learning is a key enabler for boosting intelligence at the edge, but several critical challenges (e.g., communication cost, data heterogeneity) hinder the development of federated learning in practice. In this talk, I will present my work on designing a personalized federated learning system that can jointly improve communication and computation efficiency. I will also outline future research directions for building intelligent next-generation wireless networks with federated learning.

### April 4, 2022 Meeting

**Speaker:** Shyam Venkatasubramanian, Duke University

**Title:** Toward Data-Driven STAP Radar

**Abstract:** Using an amalgamation of techniques from classical radar, computer vision, and deep learning, we characterize our ongoing data-driven approach to space-time adaptive processing (STAP) radar. We generate a rich example dataset of received radar signals by randomly placing targets of variable strengths in a predetermined region using RFView, a site-specific radio frequency modeling and simulation tool developed by ISL Inc. For each data sample within this region, we generate heatmap tensors in range, azimuth, and elevation of the output power of a minimum variance distortionless response (MVDR) beamformer. These heatmap tensors can be thought of as stacked images, and in an airborne scenario, the moving radar creates a sequence of these time-indexed image stacks, resembling a video. Our goal is to use these images and videos to detect targets and estimate their locations, a procedure reminiscent of computer vision algorithms for object detection—namely, the Faster Region Based Convolutional Neural Network (Faster R-CNN). The Faster R-CNN consists of a proposal generating network for determining regions of interest (ROI), a regression network for positioning anchor boxes around targets, and an object classification algorithm; it is developed and optimized for natural images. Our ongoing research will develop analogous tools for heatmap images of radar data. In this regard, we will generate a large, representative adaptive radar signal processing database for training and testing, analogous in spirit to the COCO dataset for natural images. Subsequently, we will build upon, adapt, and optimize the existing Faster R-CNN framework, and develop tools to detect and localize targets in the heatmap tensors discussed previously. As a preliminary example, we present a regression network for estimating target locations to demonstrate the feasibility of and significant improvements provided by our data-driven approach.
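
Each cell of the heatmap tensors described above is an MVDR (Capon) output power, 1/(s^H R^{-1} s), evaluated at that cell's steering vector. A minimal sketch in one spatial dimension, with a hypothetical covariance built from a single target plus noise:

```python
import numpy as np

def mvdr_power(R, s):
    """MVDR (Capon) output power for covariance R and steering vector s:
    P = 1 / (s^H R^{-1} s)."""
    Rinv_s = np.linalg.solve(R, s)
    return 1.0 / np.real(s.conj() @ Rinv_s)

def mvdr_heatmap(R, steering_grid):
    """Evaluate MVDR power over a grid of steering vectors; scanning a
    range/azimuth/elevation (or delay/Doppler) grid this way produces a
    heatmap tensor."""
    return np.array([mvdr_power(R, s) for s in steering_grid])
```

For a covariance containing one planewave component, the scan peaks at that component's frequency, i.e., the target shows up as a bright cell in the heatmap.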

### March 7, 2022 Meeting

**Speaker:** Ananthanarayanan Chockalingam, Indian Institute of Science, Bangalore

**Title:** Deep Neural Networks in OTFS Transceivers Design

**Abstract:** Orthogonal time frequency space (OTFS) modulation, a recently introduced modulation scheme which multiplexes information symbols in the delay-Doppler (DD) domain, has been shown to offer robust performance in high-Doppler channels – channels where OFDM fails to perform well. A key requirement in OTFS transceivers design is signal processing in the DD domain. Like in several other fields, deep learning has found application in wireless PHY layer design (e.g., design of channel codes, signal detection, channel prediction and tracking, beamforming, precoding, IQ imbalance compensation). This talk will focus on the use of deep neural networks (DNNs) for efficient design of OTFS transceivers. It will present the design and performance of a low-complexity DNN architecture for OTFS signal detection, where each information symbol multiplexed in the DD grid is associated with a separate DNN. This symbol-level DNN has fewer parameters to learn compared to a full DNN that considers all the symbols in an OTFS frame jointly. Under the assumption of standard Gaussian i.i.d. noise model, the symbol-DNN detection performance is close to the maximum-likelihood (ML) detection performance. When the noise model deviates from the standard Gaussian i.i.d. model, the DNN based detection is shown to outperform ML detection (which is optimum only when the noise is Gaussian and i.i.d.).

### February 7, 2022 Meeting

**Speaker:** Juncheng Dong, Duke University

**Title:** Blaschke Product Neural Networks (BPNN): A Physics-Infused Neural Network for Phase Retrieval of Meromorphic Functions

**Abstract:** Numerous physical systems are described by ordinary or partial differential equations whose solutions are given by holomorphic or meromorphic functions in the complex domain. In many cases, only the magnitude of these functions is observed at various points on the purely imaginary jw-axis since coherent measurement of their phases is often expensive. However, it is desirable to retrieve the lost phases from the magnitudes when possible. To this end, we propose a physics-infused deep neural network based on the Blaschke products for phase retrieval. Inspired by the Helson and Sarason Theorem, we recover coefficients of a rational function of Blaschke products using a Blaschke Product Neural Network (BPNN), based upon the magnitude observations as input. The resulting rational function is then used for phase retrieval. We compare the BPNN to conventional deep neural networks (NNs) on several phase retrieval problems, comprising both synthetic and contemporary real-world problems (e.g., metamaterials for which data collection requires substantial expertise and is time consuming). On each phase retrieval problem, we compare against a population of conventional NNs of varying size and hyperparameter settings. Even without any hyper-parameter search, we find that BPNNs consistently outperform the population of optimized NNs in scarce data scenarios, and do so despite being much smaller models. The results can in turn be applied to calculate the refractive index of metamaterials, which is an important problem in emerging areas of material science.
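
The role of Blaschke products here rests on a basic fact: a Blaschke product is unimodular on the jw-axis, so multiplying a transfer function by one changes its phase without changing the observed magnitudes, which is exactly the ambiguity phase retrieval must resolve. A small numerical check of this property (the zero locations below are arbitrary illustrative values):

```python
import numpy as np

def blaschke_rhp(s, zeros):
    # Finite Blaschke product for the right half-plane, with zeros a_i
    # satisfying Re(a_i) > 0: B(s) = prod (s - a_i) / (s + conj(a_i)).
    s = np.asarray(s, dtype=complex)
    out = np.ones_like(s)
    for a in zeros:
        out *= (s - a) / (s + np.conj(a))
    return out

# On the purely imaginary axis s = jw, each factor has unit modulus.
w = np.linspace(-5.0, 5.0, 201)
B = blaschke_rhp(1j * w, zeros=[1.0 + 2.0j, 0.5 - 1.0j])
```

Because |B(jw)| = 1 everywhere on the axis, no magnitude-only measurement can distinguish a function f from B·f, which is why the BPNN parameterizes the missing phase through Blaschke coefficients.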

### January 24, 2022 Meeting

**Slides:** Randomized Subspace Embeddings (PDF)

**Papers:**

- Efficient Randomized Subspace Embeddings for Distributed Optimization under a Communication Budget
- Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms

**Speaker:** Rajarshi Saha, Stanford University

**Title:** Randomized Subspace Embeddings for Learning under Resource Constraints

**Abstract:** With the advent of big data, training and deploying large learning models under resource-constrained settings is becoming a significant challenge. This talk will focus on our ongoing work for two such scenarios.

The first part of the talk will be on Distributed Learning under Communication Constraints. In this setting, computation is off-loaded to several edge devices that are coordinated by a central server. Communication cost between the edge device and the central server is the primary bottleneck to the scalability of such distributed systems. We will see some computationally efficient algorithms that have (near)-optimal performance.

The second part of the talk will be on Model Compression, which is critical for deploying learning models on memory-constrained devices. We will first discuss information-theoretic limits of quantizing models subject to a bit-budget, and then see some practical model quantization algorithms that achieve those limits.

The central theme for both topics will be randomized subspace embedding-based quantization schemes. These schemes are agnostic to any prior information about the distribution of the input to the quantizer which is often relevant for optimizing worst-case performance. They also achieve a dimension-independent quantization error that is critical for high-dimensional learning problems.

To find out more about the first part of the talk follow the link: Efficient Randomized Subspace Embeddings for Distributed Optimization under a Communication Budget
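
A simplified sketch of the random-embedding idea behind both parts of the talk: rotate the vector by a random orthogonal matrix so that no single coordinate dominates, scalar-quantize uniformly, then rotate back. (The rotation construction and bit allocation below are illustrative, not the exact schemes from the papers above.)

```python
import numpy as np

rng = np.random.default_rng(1)
d = 256

def random_rotation(d, rng):
    # Random orthogonal matrix via QR of a Gaussian matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return Q

def quantize(x, Q, bits=4):
    # Rotate, uniformly scalar-quantize each coordinate, rotate back.
    y = Q @ x
    half_levels = 2 ** bits / 2
    scale = np.max(np.abs(y))
    q = np.round(y / scale * half_levels) * scale / half_levels
    return Q.T @ q

x = rng.standard_normal(d)
x_hat = quantize(x, random_rotation(d, rng))
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

Because the rotation spreads the energy of any fixed input across all coordinates, the same quantizer works without prior knowledge of the input distribution, which is the distribution-agnostic property the abstract highlights.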

### December 20, 2021 Meeting

**Slides:** NSF AI Institute for Edge Computing Leveraging Next Generation Networks (Athena) (PDF)

**Speaker:** Yiran Chen, Duke University

**Title:** Athena: NSF AI Institute for Edge Computing Leveraging Next Generation Networks

**Abstract:** Yiran leads ATHENA, the new NSF Institute connecting AI with Next Generation Networks, and these connections are central to the CoE. Modern mobile networks are in need of a revolution to deliver unprecedented performance promises and to empower previously impossible services while keeping their complexity and cost under control. As the flagship AI institute of NSF's computer systems research program, the Athena Institute capitalizes on and responds to these challenges by advancing AI technologies to transform the design, operation, and service of future mobile networks through four synergistic thrusts: Networking, Computer Systems, AI, and Services. Serving as a nexus point for the community, Athena also spearheads collaboration and knowledge transfer to translate its emerging technical capabilities to new business models and entrepreneurial opportunities, transforming the future competition model in both industry and research.

### December 6, 2021 Meeting

**Slides:** Communication in the Delay Doppler Domain (PDF)

**Speaker:** Ronny Hadani, Cohere Technologies and the University of Texas, Austin

**Title:** OTFS: a paradigm of communication in the delay-Doppler domain

**Abstract:** In this talk I will introduce the OTFS (Orthogonal Time Frequency and Space) modulation scheme which is based on multiplexing information QAM symbols on localized pulses in the delay-Doppler domain. I will explain the mathematical foundations of OTFS, emphasizing the underlying structure which establishes a conceptual link between communication and radar theory. I will show how OTFS naturally generalizes conventional time and frequency modulations such as TDM and FDM. I will also discuss the unique way OTFS waveforms couple with the wireless channel which allows the coherent combining of all the time and frequency diversity modes of the channel to maximize the received energy. Finally, I will briefly hint towards the intrinsic advantages of OTFS over multicarrier modulations for communication under high Doppler conditions and communication under strict power constraints.
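
In the simplest discrete setting, the multiplexing step described above is a 2-D transform pair: the inverse symplectic finite Fourier transform (ISFFT) carries the delay-Doppler grid to the time-frequency grid, and the SFFT carries it back. A minimal round-trip sketch under one common axis convention, ignoring pulse shaping (the Heisenberg/Wigner transforms) and the channel:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 8, 4    # delay bins x Doppler bins (illustrative sizes)

def isfft(x_dd):
    # Delay-Doppler -> time-frequency (one common sign/axis convention).
    return np.fft.fft(np.fft.ifft(x_dd, axis=0), axis=1)

def sfft(x_tf):
    # Time-frequency -> delay-Doppler: the exact inverse of isfft above.
    return np.fft.fft(np.fft.ifft(x_tf, axis=1), axis=0)

# QPSK information symbols multiplexed on the delay-Doppler grid.
bits = rng.integers(0, 2, (2, M, N))
x_dd = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

x_tf = isfft(x_dd)       # OTFS modulation onto the time-frequency grid
x_dd_hat = sfft(x_tf)    # receiver maps back to delay-Doppler
```

In a full system the time-frequency samples would be carried by an OFDM-like waveform, which is the "OFDM overlay" architecture mentioned elsewhere on this page.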

### November 22, 2021 Meeting

**Slides:** Planned Remote Lab Exercises and Simulations (PDF)

**Speaker:** Carl Dietrich, Virginia Tech

**Title:** Remote Laboratory Exercises on SDR-Based Wireless Testbed

**Abstract:** Virginia Tech has developed software that enables students and other users to control and monitor the spectrum and/or data rate of signals and communication links on a software defined radio (SDR)-based wireless testbed. The user interface runs remotely, within a standard web browser. The software enables students to control SDRs using slider controls and/or adaptive controller code that can be edited from within the web-based user interface. Further, the software framework that enables the exercises permits multiple users to control radios that coexist within the same spectrum, setting the stage for future collaborative and competitive scenarios. Virginia Tech intends to extend the underlying experiment management framework to support data logging for research experimentation, and to interface the framework with COTS wireless devices as well as the current SDRs and custom waveform applications or flowgraphs.

### November 8, 2021 Meeting

**Slides:** OTFS Modulation: A Zak Transform Perspective (PDF)

**Speaker:** Christ Richmond, Arizona State University

**Title:** OTFS modulation: A Zak Transform Perspective

**Abstract:** Orthogonal time frequency space (OTFS) modulation has gained significant attention over the last few years as a result of its ability to compensate for delay as well as Doppler spreads in dynamic wireless communication channels. It has been mentioned in the literature that OTFS is a modulation scheme based on the Zak transform, in a manner analogous to orthogonal frequency division multiplexing (OFDM) being based on the Fourier transform. In this talk, we present a simple “signals and systems” approach to understanding OTFS from a Zak transform perspective. We discuss the representation of linear time-varying (LTV) channels in the delay-Doppler domain, and the manner in which we can interpret this delay-Doppler representation as a “Zak response” of the channel, analogous to the frequency response for linear time-invariant (LTI) channels. We derive the Zak domain relationship between the input and output for an underspread LTV channel, and argue that this relationship forms the basis of OTFS modulation. The Zak domain-based interpretation of OTFS can prove to be suitable for analyzing OTFS in depth, and answering various questions regarding the spectral efficiency and other fundamental performance limits for OTFS modulation.
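
In discrete time, the Zak transform is simply a reshape of a length-MN signal followed by an FFT along the Doppler axis, and it is unitary, which is what makes the "Zak response" viewpoint a faithful change of coordinates. A small sketch (the grid sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 8, 4    # delay bins, Doppler bins; signal length M * N

def dzt(x, M, N):
    # Discrete Zak transform:
    # Z[l, k] = (1/sqrt(N)) * sum_n x[l + n*M] * exp(-2j*pi*n*k/N).
    return np.fft.fft(x.reshape(N, M).T, axis=1) / np.sqrt(N)

def idzt(Z):
    # Inverse: IFFT along the Doppler axis, then undo the reshape.
    N = Z.shape[1]
    return (np.fft.ifft(Z, axis=1) * np.sqrt(N)).T.reshape(-1)

x = rng.standard_normal(M * N) + 1j * rng.standard_normal(M * N)
Z = dzt(x, M, N)
```

The 1/sqrt(N) normalization makes the transform unitary (it preserves the signal's energy), so working in the Zak domain loses nothing relative to the time domain.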

### October 18, 2021 Meeting

**Slides:** Trust and Resilience in Distributed Consensus Cyberphysical Systems (PDF)

**Speaker:** Michal Yemini, Princeton University

**Title:** Trust and Resilience in Distributed Consensus Cyberphysical Systems

**Abstract:** The distributed consensus problem is of core importance to many algorithms and coordinated behaviors in multi-agent systems. It is well known, however, that these algorithms are vulnerable to malicious activity and that several of the existing performance guarantees for the nominal case fail in the absence of reliable cooperation. Many works have investigated the possibility of attaining resilient consensus in the face of malicious agents. This talk presents a new approach to this problem which leads to the conclusion that, under very mild conditions on the link trustworthiness estimate, the deterministic classical bound of 1/2 of the network connectivity can be improved, and significantly more malicious agents can be tolerated.
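
The flavor of the result can be seen in a toy deterministic example where the trust values have already been resolved: once legitimate agents assign zero weight to a distrusted link, ordinary averaging converges to the mean of the legitimate initial values even though the malicious agent never stops transmitting. (The trust weights below are assumed given; the talk's contribution concerns stochastic trust estimates and improving the connectivity bound.)

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6                          # agents 0..4 legitimate, agent 5 malicious
x = rng.uniform(0, 10, n)
target = x[:5].mean()          # consensus value of the legitimate agents

# Row-stochastic trust-weighted averaging matrix. Trust is assumed already
# resolved: legitimate agents place zero weight on agent 5's link.
W = np.zeros((n, n))
W[:5, :5] = 1.0 / 5            # uniform weight over trusted neighbors (incl. self)
W[5, 5] = 1.0                  # the malicious agent keeps its own value

for _ in range(50):
    x = W @ x                  # repeated trust-weighted averaging
```

With uncertain, stochastic trust observations the zero/one weights above are not available directly, which is precisely the setting the talk addresses.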

### October 4, 2021 Meeting

**Slides:** Multi-Agent Adversarial Attacks for Multi-Channel Communications (PDF)

**Speaker:** Mohammadreza Soltani

**Title:** Multi-Agent Adversarial Attacks for Multi-Channel Communications

**Abstract:** Recently, reinforcement learning (RL) has been applied as a successful anti-adversary strategy for providing reliable communication in wireless communication networks. However, studying the RL-based approaches from the adversary’s perspective for designing defense mechanisms has received little attention. Additionally, RL-based approaches in an anti-adversary or adversary paradigm mostly consider single-channel communication (either channel selection or single-channel power control), while multi-channel communication is more common in practice. In this presentation, we propose a multi-agent adversary system (MAAS) for modeling and analyzing adversaries in a wireless communication scenario by careful design of the reward function under realistic communication scenarios. In particular, by modeling the adversaries as learning agents, we show that the proposed MAAS is able to successfully choose the transmitted channel(s) together with the allocated power(s) without any prior knowledge of the sender strategy. Compared to the single-agent adversary (SAA), multi-agents in MAAS can achieve significant gains in signal-to-noise ratio under the same power constraints and partial observability, while providing additional stability and a more efficient learning process.

### September 13, 2021 Meeting

**Speaker:** Lingjia Liu, Virginia Tech

**Title:** Deep Echo State Q-Network (DEQN) for Next Generation Wireless Networks

**Abstract:** Motivated by the recent success of deep reinforcement learning (DRL), in this talk, we adopt DRL to build an intelligent wireless network. An efficient DRL framework called deep echo state Q-network (DEQN) has been developed by adopting the echo state network (ESN) as the kernel of deep Q-networks. The associated computationally efficient training algorithms have been developed by utilizing the special structure of ESNs to achieve a good policy with limited training data. Convergence analysis of the introduced DEQN approach has been conducted to demonstrate the faster convergence of DEQN compared to that of the deep recurrent Q-network (DRQN), a popular DRL framework widely used for wireless networks. For performance evaluation, we will apply our DEQN framework under the dynamic spectrum access (DSA) and the network resource allocation/user scheduling scenarios to demonstrate the efficiency and effectiveness of our scheme as opposed to the state of the art. We believe that the DEQN framework sheds light on the adoption of DRL techniques in next generation wireless networks.
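
The echo state kernel that makes DEQN cheap to train can be seen in isolation: the recurrent weights are random and fixed, and only a linear readout is fit, here in closed form by ridge regression. (The sizes, spectral-radius scaling, and toy one-step prediction task below are illustrative; DEQN wraps such a reservoir inside a deep Q-network.)

```python
import numpy as np

rng = np.random.default_rng(5)
T, res = 500, 50

# Input: a sinusoid; target: the same signal one step ahead.
u = np.sin(0.1 * np.arange(T + 1))
U, Y = u[:-1], u[1:]

# Fixed random reservoir, rescaled for the echo state property
# (spectral radius below 1); these weights are never trained.
W = rng.standard_normal((res, res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal(res)

X = np.zeros((T, res))
x = np.zeros(res)
for t in range(T):
    x = np.tanh(W @ x + W_in * U[t])   # reservoir state update
    X[t] = x

# Ridge-regression readout (closed form), after a washout period.
wash = 50
A, b = X[wash:], Y[wash:]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(res), A.T @ b)
mse = np.mean((A @ w_out - b) ** 2)
```

Because only `w_out` is learned, training reduces to a single linear solve, which is the "limited training data" advantage the abstract emphasizes.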

### August 30, 2021 Meeting

**Speaker:** Lauren Huie, AFRL

**Title:** Shaking Out Robustness from Theory to Testbed

**Abstract:** Bridging the gap between theory and testbed is not trivial. Modeling the complexity of what a network may encounter due to unintentional or intentional interference is key to characterizing performance in over-the-air environments. In the case of intentional interference due to an adversary, a few examples are given on modeling the strength of the adversary. Determining over-the-air performance requires a careful look at how we parameterize the physical environment and capture the state of the experimental setup. An experimental framework is described which bridges the gap between theory and practice, allowing for easy entry from MATLAB to measurements.

### August 16, 2021 Meeting

**Speaker:** Dylan Wheeler, AFRL intern

**Title:** Asynchronous SCMA Uplink Multiuser Detection with Unknown Channel Delays

**Abstract:** In recent years, there has been a surge of research regarding the development of a viable non-orthogonal multiple access (NOMA) scheme, which is seen by many as a potential solution to the problem of increasingly crowded spectral resources. NOMA schemes aim to pack more users into the system than there are orthogonal resource elements, and one scheme that has emerged as a clear frontrunner is termed sparse code multiple access (SCMA). In the uplink, SCMA involves mapping each user’s bits to a sparse codeword unique to that user, which is then spread over the orthogonal resource elements and transmitted. At the receiver, the message passing algorithm (MPA) can then be implemented to jointly detect each user’s bits, assuming synchronized reception. In this talk, we drop the assumption of synchronization, which may not be practical in many systems but is nonetheless held in the vast majority of the literature. We introduce a novel method of performing multiuser detection within an SCMA system for which each user experiences some channel delay that is *unknown* to the receiver. The proposed algorithm involves a compressed sensing step in addition to MPA, to compensate for the lack of available information. Preliminary simulations over an additive white Gaussian noise channel suggest that a favorable bit error rate can be achieved under certain SNR conditions.

### August 2, 2021 Meeting

**Speaker:** Ali Pezeshki, Colorado State University

**Title:** A General Framework for Bounding Approximate Dynamic Programming Schemes

**Abstract:** For years, there has been interest in approximation methods for solving dynamic programming problems, because of the inherent complexity in computing optimal solutions characterized by Bellman’s principle of optimality. A wide range of approximate dynamic programming (ADP) methods now exists. Examples of ADP methods are myopic schemes, roll-out schemes, and reinforcement learning schemes. It is of great interest to guarantee that the performance of an ADP scheme be at least some known fraction, say β, of optimal. In this talk, we introduce a general approach to bounding the performance of ADP methods, in this sense, in the stochastic setting. The approach is based on new results for bounding greedy solutions in string optimization problems, where one has to choose a string (ordered set) of actions to maximize an objective function. This bounding technique is inspired by submodularity theory, but submodularity is not required for establishing bounds. Instead, the bounding is based on quantifying certain notions of curvature of string functions; the smaller the curvatures, the better the bound. The key insight is that any ADP scheme is a greedy scheme for some surrogate string objective function that coincides in its optimal solution and value with those of the original optimal control problem. The ADP scheme is then amenable to the bounding technique mentioned above, and the curvatures of the surrogate objective determine the value β of the bound. The surrogate objective and its curvatures depend on the specific ADP.

### July 19, 2021 Meeting

**Speaker:** Elizabeth Bentley, AFRL

**Title:** A Distributed Deep-Reinforcement Learning Framework for Software-Defined UAV Network Control

**Abstract:** Control and performance optimization of wireless networks of Unmanned Aerial Vehicles (UAVs) require scalable approaches that go beyond architectures based on centralized network controllers. At the same time, the performance of model-based optimization approaches is often limited by the accuracy of the approximations and relaxations necessary to solve UAV network control problems through convex optimization or similar techniques and by the accuracy of the channel network models used. To address these challenges, a new architectural framework to control and optimize UAV networks is developed based on Deep Reinforcement Learning (DRL). A virtualized, ‘ready-to-fly’ emulation environment is created to generate the extensive wireless data traces necessary to train DRL algorithms, which are notoriously hard to generate and collect on battery-powered UAV networks. The training environment integrates previously developed wireless protocol stacks for UAVs into the CORE/EMANE emulation tool. This ‘ready-to-fly’ virtual environment guarantees scalable collection of high-fidelity wireless traces that can be used to train DRL agents. The proposed DRL architecture enables distributed data-driven optimization, facilitates network reconfiguration, and provides a scalable solution for large UAV networks.

### June 21, 2021 Meeting

**Speaker:** Erin Tripp, AFRL

**Title:** Application-driven Structure in Nonconvex Optimization

**Abstract:** Practice has outpaced theory in many modern applications of optimization, which increasingly include highly nonconvex or non-smooth functions. However, real-world applications often entail other useful structure that can be exploited in the development of new theory and algorithms. This talk will detail some ongoing research on sparsity-promoting regularization for signal and image processing as well as the convergence and generalization properties of neural networks.

### June 7, 2021 Meeting

**Speakers:** Steve Russell & Niranjan Suri, ARL

**Context:** In our CoE, the CORNET testbed at Virginia Tech is a way we are unifying spatially separated academic and AFRL staff to shake out theory in step with experimental benchmarking. ARL is pioneering a testbed unifying its academic collaborators with their government staff. They are extending this proving ground to lay the foundation for a joint-service (Army, Navy, Air Force) collaborative testbed and have invited us to participate.

**Title:** The ARL Distributed Virtual Proving Ground (DVPG)

**Abstract:** The concept of a Distributed Virtual Proving Ground (DVPG) builds on initial notions from the Army’s Internet of Battlefield Things (IOBT) research. The DVPG is a network of highly distributed testbeds to enable highly virtualized, multi-site experimentation among DEVCOM ARL and its collaborators. The DVPG targets the Army’s need for advanced distributed modeling and simulation, its need to accelerate converged experimental innovation, and its need for complex datasets and an environment where basic research can be explored and evaluated. It is intended to be a fully-instrumented capability to execute collective sensing experimentation and evaluation, with mobility and spectrum effects, over broad geographically-distributed range and stand-off. The planned talk will introduce the DVPG concept and provide details on specific capabilities that are available to partner organizations.

### April 5, 2021 Meeting

**Speaker:** Jeff Reed, Virginia Tech

**Title:** 5G Standardization and Satellites

**Abstract:** The 3GPP organization, which standardizes cellular systems such as 3G, 4G, and 5G, is currently looking at extending 5G’s reach to Non-Terrestrial Communications (NTC). While this work is proceeding with study groups, there are many challenges in extending the 5G waveform and the overall network architecture. This presentation will discuss the technical issues faced by 3GPP to standardize NTC, the timetable for standardization, and the anticipated interoperability issues and network architectures.

### March 22, 2021 Meeting

**Slides:** 6G Wireless – Illuminating New Directions in Waveform Design (PDF)

**Speaker:** Robert Calderbank, Duke University

**Title:** 6G Wireless – Illuminating New Directions in Waveform Design

**Abstract:** The world of wireless communications has changed rapidly, and I will look back at GSM, CDMA, and OFDM, and describe how these technologies were developed in response to demanding use cases. I will then try to look forward at use cases motivating 6G wireless, such as drones, explore what might be possible with OFDM, and what might be more difficult. This will motivate a discussion of OTFS (Orthogonal Time Frequency Space), a physical layer technology that can be architected as an OFDM overlay.

### March 8, 2021 Meeting

**Slides:** Wireless System Design Using Optimization and Machine Learning (PDF)

**Speaker:** Andrea Goldsmith, Princeton University

**Title:** Wireless System Design using Optimization and Machine Learning

**Abstract:** Design and analysis of communication systems have traditionally relied on mathematical and statistical channel models that describe how a signal is corrupted during transmission. In particular, communication techniques such as modulation, coding and detection that mitigate performance degradation due to channel impairments are based on such channel models and, in some cases, instantaneous channel state information about the model. However, there are propagation environments where this approach does not work well because the underlying physical channel is too complicated, poorly understood, or rapidly time-varying. In these scenarios we propose completely new approaches to detection in the communication receiver based on optimization and machine learning (ML). In this approach, the detection algorithm utilizes tools from optimization and ML. We present results for three communication design problems where the optimization and ML approaches result in better performance than current state-of-the-art techniques: blind massive MIMO detection, signal detection without accurate channel state information, and signal detection without a mathematical channel model. Broader application of optimization and ML to communication system design in general and to millimeter wave communication systems in particular is also discussed.

### February 22, 2021 Meeting

**Speaker:** Lingjia Liu, Virginia Tech

**Title:** Learning with Knowledge of Structure: A Neural Network-Based Approach for MIMO-OFDM Detection

**Abstract:** We explore neural network-based strategies for performing symbol detection in a MIMO-OFDM system. Building on a reservoir computing (RC)-based approach towards symbol detection, we introduce a symmetric and decomposed binary decision neural network to take advantage of the structure knowledge inherent in the MIMO-OFDM system. To be specific, the binary decision neural network is added in the frequency domain utilizing the knowledge of the constellation. We show that the introduced symmetric neural network can decompose the original M-ary detection problem into a series of binary classification tasks, thus significantly reducing the neural network detector complexity while offering good generalization performance with limited training overhead. Numerical evaluations demonstrate that the introduced hybrid RC-binary decision detection framework performs close to maximum likelihood model-based symbol detection methods in terms of symbol error rate in the low SNR regime with imperfect channel state information (CSI).
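
The decomposition idea can be illustrated with a plain 16-QAM slicer: nearest-neighbor (ML under AWGN) detection over the four levels of each I/Q axis is reproduced exactly by two binary decisions per axis, one for the sign and one for the inner/outer level. (This hand-built toy is our illustration; the talk's binary decision networks are learned and operate after RC-based equalization of the fading channel.)

```python
import numpy as np

rng = np.random.default_rng(6)
levels = np.array([-3.0, -1.0, 1.0, 3.0])   # PAM-4 per I/Q axis of 16-QAM

def detect_ml(r):
    # Exhaustive nearest-neighbor (ML under AWGN) over the four levels.
    return levels[np.argmin(np.abs(r[:, None] - levels[None, :]), axis=1)]

def detect_binary(r):
    # Two binary decisions reproduce the same slicer:
    sign = np.where(r > 0, 1.0, -1.0)        # decides left/right half
    mag = np.where(np.abs(r) > 2, 3.0, 1.0)  # decides inner vs outer level
    return sign * mag

# Noisy received samples on one axis.
r = levels[rng.integers(0, 4, 1000)] + 0.5 * rng.standard_normal(1000)
```

The M-ary search costs one comparison per constellation level, while the binary decomposition costs log2(M) decisions, which is the complexity reduction the abstract refers to.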

### February 8, 2021 Meeting

**Speaker:** Hoda Bidkhori, University of Pittsburgh

**Title:** Robust Multi-Agent AI for Contested Environments

**Abstract:** Many Air Force problems, such as protecting communications networks against adversaries in contested environments, can be formulated as games between adversaries and defenders. The challenge is that these games, along with their states and actions, are not fully known or observable in practice and must be learned in real time from online observations.

In this talk, we propose several frameworks to address these challenging problems. We present robust learning and optimization frameworks to solve decision-making problems confronted with real-time, non-stationary, and incomplete (potentially missing) data. Furthermore, we employ neural networks to address more complex settings, using deep reinforcement learning to learn the system’s variables, states, and objectives and to produce practical solutions.

This is joint work with Vahid Tarokh.

### January 25, 2021 Meeting

**Speaker:** Ali Pezeshki, Colorado State University

**Title:** A Sense-Learn-Adapt Framework for Communication in Contested Environments

**Abstract:** We discuss an approach to developing a *sense-learn-adapt* framework for communication in contested environments, characterized by adversarial interference. This framework involves three interrelated problems. (1) *Sensing* the adversary: At any given time, the adversary has an estimate of the *state* of the friendly communication assets. This state might be the subspace or collection of subspaces (in space-wavenumber-frequency) that the friendly assets communicate over. Given this estimate, the adversary generates interference (e.g., by pouring power into a specific subspace) to impede the communication of friendly assets. Sensing the adversary involves estimating the adversary’s estimate of the state of the friendly assets, given observations of the adversary’s actions (the generated interference). (2) *Learning* the adversary: This amounts to determining whether the adversary is *cognitive*; that is, whether or not it chooses its actions (e.g., interference subspace) based on a constrained optimization problem, and if so, what the corresponding utility function is. One enabling tool here might be the *theory of revealed preferences* from microeconomics. (3) *Adapting* the friendly assets: Given the utility function of the adversary and its sequence of actions, the problem is to adapt the communication subspaces of friendly assets to confuse the adversary, while achieving a desired rate. Our actions here, in abstract, might take the form of selecting subspaces that are parameterized by waveforms, beam and/or frequency allocations, and/or the geometry of communication assets. We discuss one example formulation of these three steps in a simple setting. But the main aim of this talk is to discuss a principled approach and seek collaborators for various extensions of these ideas, rather than to present results for specific scenarios. The framework discussed here is inspired by and builds on recent work of Krishnamurthy et al. on adversarial cognitive radar.

### January 11, 2021 Meeting

**Slides:** Bounds on Bearing, Symbol, and Channel Estimation under Model Misspecification

**Speakers:** Akshay S. Bondre, Touseef Ali, and Christ D. Richmond, Arizona State University

**Title:** Bounds on Bearing, Symbol, and Channel Estimation Under Model Misspecification

**Abstract:** The constrained Cramér-Rao bound (CRB) has been used successfully to study parameter estimation in flat fading scenarios, and to establish the value of side information such as known waveform properties (e.g. constant modulus) and known training symbols. There are classes of communication links, however, that may be subject to highly dynamic changes, and this could cause the assumed data model to be an inaccurate model of the channel. Therefore, the constrained misspecified CRB is considered to explore the impact of model mismatch for such communication links. Specifically, quantifying the loss in estimation performance when one assumes the channel is stationary when it is not is of interest. As we explore the application of machine/deep learning to dynamic channels, measures such as the constrained MCRB may help to lend insights into convergence rates, benefits of transfer learning, and the level of fidelity/complexity required to achieve desired performance.

### November 30, 2020 Meeting

**Slides:** How to Stop Worrying about Ill-Conditioning in Low-Rank Matrix Estimation (PDF)

**Speaker:** Yuejie Chi, CMU

**Title:** Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent

**Abstract:** Low-rank matrix estimation is a canonical problem that finds numerous applications in signal processing, machine learning and imaging science. A popular approach in practice is to factorize the matrix into two compact low-rank factors, and then optimize these factors directly via simple iterative methods such as gradient descent and alternating minimization. Despite non-convexity, recent literature has shown that these simple heuristics in fact achieve linear convergence when initialized properly for a growing number of problems of interest. However, upon closer examination, existing approaches can still be computationally expensive especially for ill-conditioned matrices: the convergence rate of gradient descent depends linearly on the condition number of the low-rank matrix, while the per-iteration cost of alternating minimization is often prohibitive for large matrices.

The goal of this paper is to set forth a competitive algorithmic approach dubbed Scaled Gradient Descent (ScaledGD) which can be viewed as pre-conditioned or diagonally-scaled gradient descent, where the pre-conditioners are adaptive and iteration-varying with a minimal computational overhead. With tailored variants for low-rank matrix sensing, robust principal component analysis and matrix completion, we theoretically show that ScaledGD achieves the best of both worlds: it converges linearly at a rate independent of the condition number of the low-rank matrix, similar to alternating minimization, while maintaining the low per-iteration cost of gradient descent. To the best of our knowledge, ScaledGD is the first algorithm that provably has such properties over a wide range of low-rank matrix estimation tasks.
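
The algorithmic change is a single line: right-multiply the gradient by the inverse Gram matrix of the current factor. A toy symmetric rank-2 instance with condition number 100 (the initialization, step sizes, and problem sizes below are illustrative stand-ins for the paper's spectral initialization and analyzed settings):

```python
import numpy as np

rng = np.random.default_rng(7)
n, r = 30, 2

# Ill-conditioned rank-2 ground truth: singular values 9 and 0.09.
Q = np.linalg.qr(rng.standard_normal((n, r)))[0]
L_star = Q * np.array([3.0, 0.3])
M = L_star @ L_star.T

def run(scaled, iters=300):
    L = L_star + 1e-3 * rng.standard_normal((n, r))  # stand-in for spectral init
    eta = 0.5 if scaled else 0.5 / 9.0               # plain GD needs eta ~ 1/sigma_max(M)
    for _ in range(iters):
        G = (L @ L.T - M) @ L                        # grad of 0.25 * ||L L^T - M||_F^2
        if scaled:
            G = G @ np.linalg.inv(L.T @ L)           # the ScaledGD preconditioner
        L = L - eta * G
    return np.linalg.norm(L @ L.T - M) / np.linalg.norm(M)

err_gd, err_sgd = run(False), run(True)
```

The preconditioner rescales each factor direction by its own energy, so the weak singular direction is no longer starved of progress; with the same iteration budget, plain gradient descent is left with a visibly larger residual.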

### November 16, 2020 Meeting

**Slides:** DNN-Based Power Amplifier Pre-Distortion for Communications in Contested Environments (PDF)

**Papers:**

- Variational Inference with Normalizing Flows (PDF)
- Normalizing Flows for Probabilistic Modeling and Inference (PDF)

**Contact:** Yi Feng, Ph.D.

**Title:** Power Amplifier Predistortion via Reversible Deep Neural Networks

**Abstract:** Hardware limitations may be a key issue in efficient communications in contested environments. In particular, the power amplifier (PA) is a key element that must be considered. In practice, there may always exist inherent non-linearities in power amplifiers, causing signal constellation compression and bandwidth growth. In this work, we design a digital pre-distorter to compensate for these non-linearities. Inspired by the idea of Normalizing Flows, we propose a reversible Deep Neural Network (DNN) based architecture and construct digital pre-distorters to mitigate the non-linearities. Our approach gives significant linearization improvements over the state of the art. Simulations demonstrating these improvements are presented.
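
The predistortion principle is easy to see with a toy memoryless PA model and a polynomial inverse fitted by least squares; the talk's contribution replaces the polynomial with a reversible DNN and addresses realistic PA behavior. (The cubic compression model and fit degree below are illustrative assumptions, not the talk's PA model.)

```python
import numpy as np

def pa(x):
    # Toy memoryless PA with odd-order compression (illustrative model).
    return x - 0.1 * x ** 3

# Fit a polynomial predistorter p with pa(p(x)) ~ x on the operating range:
# regress the PA input on its output to learn the inverse nonlinearity.
x = np.linspace(-1.0, 1.0, 400)
predistort = np.poly1d(np.polyfit(pa(x), x, deg=9))

# Compare distortion with and without predistortion on a test signal.
sig = 0.8 * np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))
err_raw = np.max(np.abs(pa(sig) - sig))              # distortion without DPD
err_pd = np.max(np.abs(pa(predistort(sig)) - sig))   # distortion with DPD
```

For a memoryless monotone nonlinearity the post-inverse equals the pre-inverse, which is why regressing input on output yields a valid predistorter; reversible DNNs provide the same invertibility by construction.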