Air Force Center of Excellence

Air Force Research Laboratory/Air Force Office of Scientific Research University Center of Excellence: Agile Waveform Design for Communication Networks in Contested Environments

Developing AI-informed communication and networking protocols

Rhodes iiD researchers are working with colleagues across the nation to ensure that future communication protocols used by the United States Air Force are suitable for handling the most data-heavy tasks imaginable, such as flying UAVs, and secure from adversarial attack.

The project is led by Robert Calderbank, the Charles S. Sydnor Distinguished Professor of Computer Science, Electrical and Computer Engineering, and Mathematics, and director of the Rhodes Information Initiative at Duke, and Vahid Tarokh, the Rhodes Family Professor of Electrical and Computer Engineering.

The new center also draws in research expertise from Virginia Tech, Princeton University, Carnegie Mellon University, Colorado State University, and Arizona State University.

The project will deepen existing collaborations between the universities involved and the Air Force Research Laboratory (AFRL). By tackling this new challenge, the researchers will increase the capabilities, knowledge, skills, and expertise of the AFRL workforce, while giving its staff opportunities to work with a large pipeline of talented students through programs like Data+ and Code+. Both are ten-week summer research experiences that pair mixed teams of Duke undergraduate and graduate students with real-life data sets and problems from partnering companies.

October 10, 2022 Meeting

Slides: Learning in the Delay-Doppler Domain (PDF)

Speaker: Robert Calderbank, Duke University

Title: Learning in the Delay-Doppler Domain

Abstract: We describe how pulsones interpolate between TDM and FDM, and when it is possible to learn input-output relations without learning the channel, opening the door to machine learning.

August 29, 2022 Meeting

Speaker: Bowen Li, Colorado State University

Title: Minimax Concave Penalty Regularized Adaptive System Identification

Abstract: We present a recursive least squares (RLS) type algorithm with a minimax concave penalty (MCP) for adaptive identification of a sparse tap-weight vector that represents a communication channel. The proposed algorithm recursively yields its estimate of the tap-vector, from noisy streaming observations of a received signal, using an expectation-maximization (EM) update. We prove the convergence of our algorithm to a local optimum and provide bounds for the steady-state error. Using simulation studies of a Rayleigh fading channel, a Volterra system, and a multivariate time series model, we demonstrate that our algorithm outperforms, in the mean-squared error (MSE) sense, the standard RLS and the $\ell_1$-regularized RLS.
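As context for the recursion above, here is a minimal sketch of the standard RLS core for channel identification; the talk's algorithm adds the MCP regularizer and EM update on top of this, and the channel taps, forgetting factor, and initialization below are illustrative assumptions.

```python
import numpy as np

def rls_identify(x_stream, d_stream, n_taps, lam=0.99, delta=1e-3):
    """Recursive least squares identification of an FIR channel.

    Baseline RLS only: the talk's method adds a minimax concave penalty
    (MCP) on top of this recursion to promote sparsity.
    """
    w = np.zeros(n_taps)
    P = np.eye(n_taps) / delta            # inverse correlation estimate
    for x, d in zip(x_stream, d_stream):
        Px = P @ x
        k = Px / (lam + x @ Px)           # gain vector
        e = d - w @ x                     # a-priori estimation error
        w = w + k * e
        P = (P - np.outer(k, Px)) / lam   # rank-one inverse update
    return w

rng = np.random.default_rng(0)
h = np.zeros(8); h[[1, 4]] = [0.9, -0.5]   # sparse tap-weight vector (assumed)
X = rng.standard_normal((300, 8))
d = X @ h                                  # noise-free received samples
w_hat = rls_identify(X, d, 8)
```

In the noise-free case the recursion recovers the taps essentially exactly; the MCP term matters once noise and sparsity come into play.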

 

August 15, 2022 Meeting

Speaker: Usama Saeed

Title: Wireless Channel Models

Abstract: An overview of the 3GPP Clustered Delay Line (CDL) channel model. The presentation is intended to highlight the key components and use-cases of the CDL channel model against a backdrop of other channel models widely accepted in the research community. Channel models such as the 3GPP Spatial Channel Model (SCM), Tapped Delay Line (TDL) and others will be compared with CDL in order to provide context for selecting an appropriate channel model for a particular simulation setup.

 

July 25, 2022 Meeting

Speakers: The Duke / Virginia Tech Data+ student team

Title: Learning to Communicate

Abstract: The team will present a GNU Radio OFDM implementation of Q-learning for interference avoidance.
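A toy version of Q-learning for interference avoidance fits in a few lines; the sweeping jammer, reward function, and hyperparameters below are illustrative assumptions, not the team's GNU Radio implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                    # number of channels; the jammer sweeps them in order
Q = np.zeros((N, N))     # Q[state, action]; state = channel jammed last step
alpha, gamma, eps = 0.2, 0.9, 0.1

s = 0
for _ in range(5000):
    # epsilon-greedy action: pick a transmit channel
    a = int(rng.integers(N)) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next = (s + 1) % N              # sweeping jammer moves one channel up
    r = 0.0 if a == s_next else 1.0   # reward for avoiding the interference
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

# Greedy policy after training: avoid the jammer's next channel
policy = Q.argmax(axis=1)
```

After training, the greedy policy never transmits on the channel the jammer is about to occupy.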

 

July 18, 2022 Meeting

Speaker: Jiarui Xu, Virginia Tech

Title: Learning to Equalize OTFS

Abstract: Orthogonal Time Frequency Space (OTFS) is a novel framework that processes modulation symbols via a time-independent channel characterized by the delay-Doppler domain. The conventional waveform, orthogonal frequency division multiplexing (OFDM), requires tracking frequency-selective fading channels over time, whereas OTFS benefits from full time-frequency diversity by leveraging appropriate equalization techniques. In this talk, we consider a neural network-based supervised learning framework for OTFS equalization. Learning of the introduced neural network is conducted in each OTFS frame, fulfilling an online learning framework: the training and testing datasets are within the same OTFS frame over the air. Utilizing reservoir computing, a special recurrent neural network, the resulting one-shot online learning is sufficiently flexible to cope with channel variations among different OTFS frames (e.g., due to the link/rank adaptation and user scheduling in cellular networks). The proposed method does not require explicit channel state information (CSI), and simulation results demonstrate a lower bit error rate (BER) than conventional equalization methods in the low signal-to-noise ratio (SNR) regime under large Doppler spreads. When compared with its neural network-based counterparts for OFDM, the introduced approach for OTFS will lead to a better tradeoff between the processing complexity and the equalization performance.

To learn more: https://ieeexplore.ieee.org/abstract/document/9745801
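The reservoir computing idea in the abstract (a fixed random recurrent network plus a one-shot linear readout) can be sketched as follows; the toy channel, reservoir size, and ridge parameter are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n_res, T = 50, 300

# Reservoir weights: fixed and random, scaled to spectral radius < 1
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal(n_res)

u = rng.standard_normal(T)                                 # received samples (toy)
target = 0.6 * np.roll(u, 1) + 0.3 * np.roll(u, 2) ** 2    # toy nonlinear target

# Drive the reservoir and collect its states
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W_in * u[t] + W @ x)
    states[t] = x

# One-shot ridge readout, trained from pilots inside the same frame
lam = 1e-2
w_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res), states.T @ target)
mse = np.mean((states @ w_out - target) ** 2)
```

Only the linear readout is trained, which is what makes per-frame (one-shot) retraining cheap.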

 

June 27, 2022 Meeting

Slides: Model-Aided Data Driven Adaptive Target Detection for Channel Matrix-Based Cognitive Radar

Speaker: Christ Richmond, Duke University

Title: Model-Aided Data Driven Adaptive Target Detection for Channel Matrix-Based Cognitive Radar

Abstract: Data-driven approaches to signal processing, including deep neural networks (DNNs), have shown promise in various fields. Such techniques tend to require significant training for good convergence. Model-based approaches, however, provide practical, data-efficient solutions, often with insightful and intuitive interpretations. A hybrid approach that employs data-driven techniques aided by knowledge from model-based approaches may help reduce required training and improve convergence rates. This work investigates the potential of deep learning techniques to detect radar targets while accelerating the learning process via use of expert/domain knowledge from model-based algorithms for channel matrix-based cognitive radar/sonar. The channel matrices characterize responses from target and clutter/reverberation. The architecture of the proposed DNN exploits the insights from the model-based generalized likelihood ratio test (GLRT) statistic presented in our previous work, and hence, the resulting DNN algorithm benefits from the merits of both the model-based and data-driven approaches. Our proposed DNN architecture utilizes the secondary data for clutter channel estimation via the maximum-likelihood approach, and thus, requires little to no retraining with the changing clutter environment. We compare the detection performance of model-aided deep learning-based algorithms with that of traditional model-based techniques and pure data-driven DNN approaches using receiver operating characteristic (ROC) curves from Monte Carlo simulations. We also study and compare the robustness of these techniques by changing the signal-to-interference-plus-noise ratio (SINR), the number of targets and clutter sources, and the amount of available training data.

 

June 13, 2022 Meeting

Slides: Simple Formula for the Moments of Unitarily Invariant Matrix Distributions (PDF)

Papers: A Simple Formula for the Moments of Unitarily Invariant Matrix Distributions (PDF)

Speaker: Ali Pezeshki, Colorado State University

Title: A Simple Formula for the Moments of Unitarily Invariant Matrix Distributions

Abstract: We derive a simple formula for computing arbitrary moments of all matrix distributions that can be transformed to a unitarily invariant distribution through conjugation by a fixed matrix. Such distributions arise in many applications in communications, radar, and sonar. The Schur-Weyl duality is used to decompose the expected value of tensor powers of the random matrices as a linear combination of projection operators onto unitary irreducible representations. The coefficients in this combination, which are labeled by Young diagrams, are expectations of products of determinants of the random matrices. In a number of important cases, including matrix gamma and matrix beta distributions, these coefficients can be simply computed from a knowledge of the normalization factors of the distributions. Our approach has the advantage that it neatly separates combinatorial aspects of the moment calculation, which are essentially the same for all distributions in the class, from the calculation of a small number of specific distribution dependent moments.

Read more: https://ieeexplore.ieee.org/abstract/document/9747218
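A quick Monte Carlo check of the simplest such moment, E|U_ij|^2 = 1/n for a Haar-distributed unitary, illustrates the unitary invariance the formula exploits; the QR-based sampler below is a standard construction, not the paper's method.

```python
import numpy as np

def haar_unitary(n, rng):
    """Draw a Haar-distributed n x n unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))   # fix column phases so the law is exactly Haar

rng = np.random.default_rng(3)
n, trials = 3, 2000
second_moment = np.mean([abs(haar_unitary(n, rng)[0, 0]) ** 2 for _ in range(trials)])
# Unitary invariance forces E|U_ij|^2 = 1/n for every entry (here 1/3)
```

The paper's formula delivers such moments, and far higher-order ones, in closed form rather than by sampling.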

 

May 16, 2022 Meeting

Slides: Mitigating Connectivity Failures in Federated Learning via Collaborative Relaying (PDF)


Speaker: Rajarshi Saha, Stanford University

Title: Robust Federated Learning with Connectivity Failures: A Semi-Decentralized Framework with Collaborative Relaying

Abstract: Intermittent client connectivity is one of the major challenges in centralized federated edge learning frameworks. Intermittently failing uplinks to the central parameter server (PS) can induce a large generalization gap in performance, especially when the data distribution among the clients exhibits heterogeneity. In this work, to mitigate communication blockages between clients and the central PS, we introduce the concept of knowledge relaying, wherein the successfully participating clients collaborate in relaying their neighbors’ local updates to the central PS in order to boost the participation of clients with intermittently failing connectivity. We propose a collaborative relaying-based semi-decentralized federated edge learning framework where, at every communication round, each client first computes a local consensus of the updates from its neighboring clients and eventually transmits a weighted average of its own update and those of its neighbors to the PS. We appropriately optimize these averaging weights to reduce the variance of the global update at the PS while ensuring that the global update is unbiased, consequently improving the convergence rate. Finally, by conducting experiments on the CIFAR-10 dataset we validate our theoretical results and demonstrate that our proposed scheme is superior to the federated averaging benchmark, especially when the data distribution among clients is non-IID.

To find out more, follow the link: https://arxiv.org/abs/2202.11850
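The unbiasedness requirement on the averaging weights can be illustrated in the simplest possible setting: i.i.d. Bernoulli uplinks with a known success probability p, where scaling each received update by 1/p keeps the server-side average unbiased. The value of p and the absence of actual relaying are simplifying assumptions; the paper optimizes general per-link weights.

```python
import numpy as np

rng = np.random.default_rng(4)
n_clients, d, p = 10, 5, 0.6          # p: assumed Bernoulli uplink success rate
updates = rng.standard_normal((n_clients, d))
true_avg = updates.mean(axis=0)

# Importance-weighted averaging: dividing each received update by p makes the
# PS-side estimate unbiased despite intermittent connectivity.
trials = 20000
est = np.zeros(d)
for _ in range(trials):
    mask = rng.random(n_clients) < p          # which uplinks succeeded this round
    est += (updates[mask] / p).sum(axis=0) / n_clients
est /= trials
```

Averaged over many rounds, the estimate matches the true global average; the relay weights in the paper additionally minimize its per-round variance.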

 

May 2, 2022 Meeting

Slides: BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression (PDF)

Papers: BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression

Speaker: Zhize Li, Carnegie Mellon University

Title: BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression

Abstract: Communication efficiency has been widely recognized as the bottleneck for large-scale decentralized machine learning applications in multi-agent or federated environments. To tackle the communication bottleneck, there have been many efforts to design communication-compressed algorithms for decentralized nonconvex optimization, where the clients are only allowed to communicate a small amount of quantized information (aka bits) with their neighbors over a predefined graph topology. Despite significant efforts, the state-of-the-art algorithm in the nonconvex setting still suffers from a slower rate of convergence O((G/T)^{2/3}) compared with its uncompressed counterpart, where G measures the data heterogeneity across different clients, and T is the number of communication rounds. This paper proposes BEER, which adopts communication compression with gradient tracking, and shows it converges at a faster rate of O(1/T). This significantly improves over the state-of-the-art rate, by matching the rate without compression even under arbitrary data heterogeneity. Numerical experiments are also provided to corroborate our theory and confirm the practical superiority of BEER in the data heterogeneous regime.
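BEER builds on gradient tracking, which can be sketched on a toy quadratic problem as below; communication compression (BEER's actual contribution) is omitted, and the ring topology, step size, and local objectives are illustrative assumptions.

```python
import numpy as np

n, d, eta, T = 5, 3, 0.02, 5000
rng = np.random.default_rng(5)
c = rng.standard_normal((n, d))        # heterogeneous local optima: f_i(x) = ||x - c_i||^2 / 2

def grad(X):
    return X - c                       # stacked local gradients, one row per agent

# Ring-graph mixing matrix (symmetric, doubly stochastic)
W = np.eye(n) * 0.5
for i in range(n):
    W[i, (i - 1) % n] += 0.25
    W[i, (i + 1) % n] += 0.25

X = np.zeros((n, d))                   # row i: agent i's iterate
G = grad(X)
V = G.copy()                           # gradient tracker, initialized to local gradients
for _ in range(T):
    X_new = W @ X - eta * V            # mix with neighbors, step along tracked gradient
    G_new = grad(X_new)
    V = W @ V + G_new - G              # tracker follows the network-average gradient
    X, G = X_new, G_new
# All agents reach consensus at the global minimizer, the mean of the c_i,
# despite arbitrary heterogeneity in the local objectives.
```

BEER compresses the differences exchanged in the two mixing steps while preserving this tracking structure, which is what recovers the O(1/T) rate.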

 

April 18, 2022 Meeting

Speaker: Ang Li, University of Maryland

Title: Heterogeneity-Aware and Efficient Federated Learning

Abstract: Edge devices are proliferating, and the gigantic amounts of data they generate are distributed everywhere. Such distributed data fuel the intelligence at the edge where the data reside. Federated learning is a key enabler for boosting the intelligence at the edge, but there are several critical challenges (e.g., communication cost, data heterogeneity) that hinder the development of federated learning in practice. In this talk, I will present my work on designing a personalized federated learning system that can jointly improve communication and computation efficiency. I will also outline future research directions for building intelligent next-generation wireless networks with federated learning.

 

April 4, 2022 Meeting

Speaker: Shyam Venkatasubramanian, Duke University

Title: Toward Data-Driven STAP Radar

Abstract:  Using an amalgamation of techniques from classical radar, computer vision, and deep learning, we characterize our ongoing data-driven approach to space-time adaptive processing (STAP) radar. We generate a rich example dataset of received radar signals by randomly placing targets of variable strengths in a predetermined region using RFView, a site-specific radio frequency modeling and simulation tool developed by ISL Inc. For each data sample within this region, we generate heatmap tensors in range, azimuth, and elevation of the output power of a minimum variance distortionless response (MVDR) beamformer. These heatmap tensors can be thought of as stacked images, and in an airborne scenario, the moving radar creates a sequence of these time-indexed image stacks, resembling a video. Our goal is to use these images and videos to detect targets and estimate their locations, a procedure reminiscent of computer vision algorithms for object detection—namely, the Faster Region Based Convolutional Neural Network (Faster R-CNN). The Faster R-CNN consists of a proposal generating network for determining regions of interest (ROI), a regression network for positioning anchor boxes around targets, and an object classification algorithm; it is developed and optimized for natural images. Our ongoing research will develop analogous tools for heatmap images of radar data. In this regard, we will generate a large, representative adaptive radar signal processing database for training and testing, analogous in spirit to the COCO dataset for natural images. Subsequently, we will build upon, adapt, and optimize the existing Faster R-CNN framework, and develop tools to detect and localize targets in the heatmap tensors discussed previously. As a preliminary example, we present a regression network in this paper for estimating target locations to demonstrate the feasibility of and significant improvements provided by our data-driven approach.
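The MVDR (Capon) output power used to build the heatmap tensors is P(theta) = 1 / (a(theta)^H R^{-1} a(theta)). A minimal one-dimensional (azimuth-only) sketch, with an assumed half-wavelength uniform linear array and a single synthetic source standing in for the RFView simulation, is:

```python
import numpy as np

N = 8                                       # half-wavelength-spaced uniform linear array

def steer(deg):
    """Array steering vector toward the given azimuth angle."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(deg)))

# Covariance of the received data: white noise plus one strong source at 20 deg
a0 = steer(20.0)
R = np.eye(N) + 100.0 * np.outer(a0, a0.conj())
R_inv = np.linalg.inv(R)

# MVDR (Capon) output power scanned over azimuth: P(theta) = 1 / (a^H R^-1 a)
angles = np.linspace(-60.0, 60.0, 121)
power = np.array([1.0 / np.real(steer(t).conj() @ R_inv @ steer(t)) for t in angles])
# The scan peaks at the source angle; stacking such scans over range, azimuth,
# and elevation produces the heatmap tensors described in the abstract.
```

Repeating the scan per range bin and per elevation gives exactly the stacked-image structure that the Faster R-CNN-style detector then operates on.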

 

March 7, 2022 Meeting

Speaker: Ananthanarayanan Chockalingam, Indian Institute of Science, Bangalore

Title: Deep Neural Networks in OTFS Transceivers Design

Abstract: Orthogonal time frequency space (OTFS) modulation, a recently introduced modulation scheme which multiplexes information symbols in the delay-Doppler (DD) domain, has been shown to offer robust performance in high-Doppler channels – channels where OFDM fails to perform well. A key requirement in OTFS transceiver design is signal processing in the DD domain. Like in several other fields, deep learning has found application in wireless PHY layer design (e.g., design of channel codes, signal detection, channel prediction and tracking, beamforming, precoding, IQ imbalance compensation). This talk will focus on the use of deep neural networks (DNNs) for efficient design of OTFS transceivers. It will present the design and performance of a low-complexity DNN architecture for OTFS signal detection, where each information symbol multiplexed in the DD grid is associated with a separate DNN. This symbol-level DNN has fewer parameters to learn compared to a full DNN that considers all the symbols in an OTFS frame jointly. Under the assumption of a standard Gaussian i.i.d. noise model, the symbol-DNN detection performance is close to the maximum-likelihood (ML) detection performance. When the noise model deviates from the standard Gaussian i.i.d. model, the DNN-based detection is shown to outperform ML detection (which is optimum only when the noise is Gaussian and i.i.d.).

 

February 7, 2022 Meeting

Speaker: Juncheng Dong, Duke University

Title:  Blaschke Product Neural Networks (BPNN): A Physics-Infused Neural Network for Phase Retrieval of Meromorphic Functions

Abstract: Numerous physical systems are described by ordinary or partial differential equations whose solutions are given by holomorphic or meromorphic functions in the complex domain. In many cases, only the magnitude of these functions is observed at various points on the purely imaginary jω-axis, since coherent measurement of their phases is often expensive. However, it is desirable to retrieve the lost phases from the magnitudes when possible. To this end, we propose a physics-infused deep neural network based on Blaschke products for phase retrieval. Inspired by the Helson and Sarason theorem, we recover coefficients of a rational function of Blaschke products using a Blaschke Product Neural Network (BPNN), based upon the magnitude observations as input. The resulting rational function is then used for phase retrieval. We compare the BPNN to conventional deep neural networks (NNs) on several phase retrieval problems, comprising both synthetic and contemporary real-world problems (e.g., metamaterials, for which data collection requires substantial expertise and is time-consuming). On each phase retrieval problem, we compare against a population of conventional NNs of varying size and hyperparameter settings. Even without any hyperparameter search, we find that BPNNs consistently outperform the population of optimized NNs in scarce data scenarios, and do so despite being much smaller models. The results can in turn be applied to calculate the refractive index of metamaterials, which is an important problem in emerging areas of material science.
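The reason magnitude-only data on the jω-axis loses information is that Blaschke factors are all-pass there, which a few lines verify numerically; the zero location below is an arbitrary assumption.

```python
import numpy as np

def blaschke_factor(s, a):
    """Half-plane Blaschke factor with zero at a (Re a > 0): all-pass on the jw-axis."""
    return (s - a) / (s + np.conj(a))

w = np.linspace(-10.0, 10.0, 201)
a = 1.5 + 0.7j                     # hypothetical zero in the right half-plane
mags = np.abs(blaschke_factor(1j * w, a))
# The magnitude is identically 1 on the imaginary axis, so magnitude-only
# measurements cannot see Blaschke factors -- the crux of the phase problem
# that the BPNN is built to resolve.
```

Since any such factor can be multiplied in without changing observed magnitudes, recovering the Blaschke-product coefficients is exactly the ambiguity the network must learn to fix.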

 

January 24, 2022 Meeting

Slides: Randomized Subspace Embeddings (PDF)


Speaker: Rajarshi Saha, Stanford University

Title: Randomized Subspace Embeddings for Learning under Resource Constraints

Abstract: With the advent of big data, training and deploying large learning models under resource-constrained settings is becoming a significant challenge. This talk will focus on our ongoing work for two such scenarios.

The first part of the talk will be on Distributed Learning under Communication Constraints. In this setting, computation is off-loaded to several edge devices that are coordinated by a central server. Communication cost between the edge device and the central server is the primary bottleneck to the scalability of such distributed systems. We will see some computationally efficient algorithms that have (near)-optimal performance.

The second part of the talk will be on Model Compression, which is critical for deploying learning models on memory-constrained devices. We will first discuss information-theoretic limits of quantizing models subject to a bit-budget, and then see some practical model quantization algorithms that achieve those limits.

The central theme for both topics will be randomized subspace embedding-based quantization schemes. These schemes are agnostic to any prior information about the distribution of the input to the quantizer which is often relevant for optimizing worst-case performance. They also achieve a dimension-independent quantization error that is critical for high-dimensional learning problems.

To find out more about the first part of the talk follow the link: https://arxiv.org/abs/2103.07578
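The second part's core mechanism (randomly rotate, then scalar-quantize) can be sketched as below; the dense orthogonal matrix stands in for the fast structured embeddings used in practice, and the dimension, bit budget, and spiky test vector are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
d, bits = 256, 6

# Random orthogonal matrix: a dense stand-in for fast structured embeddings
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

x = np.zeros(d); x[0] = 10.0          # worst case for naive scalar quantization
y = Q @ x                             # rotation spreads the energy across coordinates

# Uniform scalar quantization with range set by the (now small) dynamic range
R = np.max(np.abs(y))
step = 2 * R / 2**bits
y_q = np.clip(np.round(y / step) * step, -R, R)
x_hat = Q.T @ y_q                     # decoder de-rotates (shares the random seed)
```

Because the rotated vector's dynamic range shrinks to roughly the 2-norm over sqrt(d), the quantization error becomes dimension-independent regardless of how the input's energy is distributed.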

 

December 20, 2021 Meeting

Slides: NSF AI Institute for Edge Computing Leveraging Next Generation Networks (Athena) (PDF)

Speaker: Yiran Chen, Duke University

Title: Athena: NSF AI Institute for Edge Computing Leveraging Next Generation Networks

Abstract: Yiran leads ATHENA, the new NSF Institute connecting AI with Next Generation Networks, and these connections are central to the CoE. Modern mobile networks are in need of a revolution to deliver unprecedented performance promises and to empower previously impossible services while keeping their complexity and cost under control. As the flagship AI institute of computer system research program of NSF, Athena Institute capitalizes and responds to these challenges by advancing AI technologies to transform the design, operation, and service of future mobile networks through four synergistic thrusts: Networking, Computer Systems, AI, and Services. Serving as a nexus point for community, Athena also spearheads collaboration and knowledge transfer to translate its emerging technical capabilities to new business models and entrepreneurial opportunities, transforming the future competition model in both industry and research.

 

December 6, 2021 Meeting

Slides: Communication in the Delay Doppler Domain (PDF)

Speaker: Ronny Hadani, Cohere Technologies and the University of Texas, Austin

Title: OTFS: a paradigm of communication in the delay-Doppler domain

Abstract: In this talk I will introduce the OTFS (Orthogonal Time Frequency and Space) modulation scheme, which is based on multiplexing information QAM symbols on localized pulses in the delay-Doppler domain. I will explain the mathematical foundations of OTFS, emphasizing the underlying structure that establishes a conceptual link between communication and radar theory. I will show how OTFS naturally generalizes conventional time and frequency modulations such as TDM and FDM. I will also discuss the unique way OTFS waveforms couple with the wireless channel, which allows the coherent combining of all the time and frequency diversity modes of the channel to maximize the received energy. Finally, I will briefly hint at the intrinsic advantages of OTFS over multicarrier modulations for communication under high Doppler conditions and under strict power constraints.
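The delay-Doppler multiplexing can be made concrete with the (inverse) symplectic finite Fourier transform that maps the DD grid to the time-frequency grid; the sign and normalization convention below is one of several used in the literature, and the grid sizes are toy values.

```python
import numpy as np

M, N = 8, 4                                   # delay bins x Doppler bins (toy sizes)

def isfft(x_dd):
    """Inverse symplectic finite Fourier transform: delay-Doppler -> time-frequency.
    (One common convention; signs/normalizations vary across the literature.)"""
    return np.fft.fft(np.fft.ifft(x_dd, axis=0), axis=1)

def sfft(x_tf):
    """Symplectic finite Fourier transform: time-frequency -> delay-Doppler."""
    return np.fft.fft(np.fft.ifft(x_tf, axis=1), axis=0)

x_dd = np.zeros((M, N), dtype=complex)
x_dd[2, 3] = 1.0                              # one QAM symbol on one DD pulse
x_tf = isfft(x_dd)
# Each DD symbol is spread evenly over the entire time-frequency grid, which is
# how OTFS coherently combines all time and frequency diversity modes.
```

The constant-magnitude spread of a single symbol across the whole grid is the discrete counterpart of the pulsone waveforms discussed elsewhere in this series.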

 

November 22, 2021 Meeting

Slides: Planned Remote Lab Exercises and Simulations (PDF)

Speaker: Carl Dietrich, Virginia Tech

Title:  Remote Laboratory Exercises on SDR-Based Wireless Testbed

Abstract:  Virginia Tech has developed software that enables students and other users to control and monitor the spectrum and/or data rate of signals and communication links on a software defined radio (SDR)-based wireless testbed.  The user interface runs remotely, within a standard web browser.  The software enables students to control SDRs using slider controls and/or adaptive controller code that can be edited from within the web-based user interface.  Further, the software framework that enables the exercises permits multiple users to control radios that coexist within the same spectrum, setting the stage for future collaborative and competitive scenarios.  Virginia Tech intends to extend the underlying experiment management framework to support data logging to support experimentation for research and to interface the framework with COTS wireless devices as well as the current SDRs and custom waveform applications or flowgraphs.

 

November 8, 2021 Meeting

Slides: OTFS Modulation: A Zak Transform Perspective (PDF)

Speaker:  Christ Richmond, Arizona State University

Title: OTFS modulation: A Zak Transform Perspective

Abstract: Orthogonal time frequency space (OTFS) modulation has gained significant attention over the last few years as a result of its ability to compensate for delay as well as Doppler spreads in dynamic wireless communication channels. It has been noted in the literature that OTFS is a modulation scheme based on the Zak transform, in a manner analogous to orthogonal frequency division multiplexing (OFDM) being based on the Fourier transform. In this talk, we present a simple “signals and systems” approach to understanding OTFS from a Zak transform perspective. We discuss the representation of linear time-varying (LTV) channels in the delay-Doppler domain, and the manner in which we can interpret this delay-Doppler representation as a “Zak response” of the channel, analogous to the frequency response for linear time-invariant (LTI) channels. We derive the Zak domain relationship between the input and output for an underspread LTV channel, and argue that this relationship forms the basis of OTFS modulation. The Zak domain-based interpretation of OTFS can prove to be suitable for analyzing OTFS in depth, and answering various questions regarding the spectral efficiency and other fundamental performance limits for OTFS modulation.
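A discrete Zak transform of a length-MN signal can be sketched as follows (one of several normalization conventions); it illustrates how the Zak domain separates delay from Doppler, with a pure delay occupying a single delay bin while spreading across all Doppler bins.

```python
import numpy as np

M, N = 8, 4          # delay period M, Doppler period N; signal length M*N

def zak(x):
    """Discrete Zak transform Z[n, k] = sum_m x[n + m*M] * exp(-2j*pi*m*k/N).
    (One of several normalization conventions in the literature.)"""
    return np.fft.fft(x.reshape(N, M), axis=0).T

def izak(Z):
    """Inverse discrete Zak transform."""
    return np.fft.ifft(Z.T, axis=0).reshape(-1)

x = np.zeros(M * N, dtype=complex)
x[3] = 1.0                      # impulse at delay 3 (a pure delay path)
Z = zak(x)
# Z is supported on the single delay bin n = 3, uniformly across Doppler bins,
# mirroring how an LTV channel's "Zak response" localizes delay and Doppler.
```

The transform is invertible, so the delay-Doppler picture loses nothing relative to the time-domain signal.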

 

October 18, 2021 Meeting

Slides: Trust and Resilience in Distributed Consensus Cyberphysical Systems (PDF)

Speaker: Michal Yemini, Princeton University

Title:  Trust and Resilience in Distributed Consensus Cyberphysical Systems

Abstract: The distributed consensus problem is of core importance to many algorithms and coordinated behaviors in multi-agent systems. It is well known, however, that these algorithms are vulnerable to malicious activity and that several of the existing performance guarantees for the nominal case fail in the absence of reliable cooperation. Many works have investigated the possibility of attaining resilient consensus in the face of malicious agents. This talk presents a new approach to this problem which leads to the conclusion that, under very mild conditions on the link trustworthiness estimate, the deterministic classical bound of 1/2 of the network connectivity can be improved, and significantly more malicious agents can be tolerated.
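The role of trust values can be illustrated with a toy consensus loop in which links whose trust estimate falls below a threshold are pruned. The trust scores, fully connected topology, and constant-value attacker are illustrative assumptions; the talk's actual contributions (learning trust from observations and the improved tolerance bound) are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
x = rng.standard_normal(n)              # legitimate agents' initial values
attack = 10.0                           # malicious agent broadcasts a constant
trust = np.append(np.ones(n), 0.05)     # assumed per-link trust estimates
keep = trust >= 0.5                     # prune links below the trust threshold

pruned, naive = x.copy(), x.copy()
for _ in range(200):
    # trust-aware averaging discards the untrusted link each round
    pruned = np.full(n, np.append(pruned, attack)[keep].mean())
    # naive averaging keeps every link, including the attacker's
    naive = np.full(n, np.append(naive, attack).mean())
# With pruning the agents agree on the legitimate mean; without it the
# attacker drags the consensus value all the way to its own input.
```

Even a coarse, noisy trust estimate suffices to break the attacker's influence, which is the intuition behind relaxing the deterministic 1/2-connectivity bound.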

 

October 4, 2021 Meeting

Slides: Multi-Agent Adversarial Attacks for Multi-Channel Communications (PDF)

Speaker: Mohammadreza Soltani

Title: Multi-Agent Adversarial Attacks for Multi-Channel Communications

Abstract: Recently, reinforcement learning (RL) has been successfully applied in the anti-adversary paradigm to provide reliable communication in wireless networks. However, studying RL-based approaches from the adversary’s perspective for designing defense mechanisms has received little attention. Additionally, RL-based approaches in an anti-adversary or adversary paradigm mostly consider single-channel communication (either channel selection or single-channel power control), while multi-channel communication is more common in practice. In this presentation, we propose a multi-agent adversary system (MAAS) for modeling and analyzing adversaries in a wireless communication scenario by careful design of the reward function under realistic communication scenarios. In particular, by modeling the adversaries as learning agents, we show that the proposed MAAS is able to successfully choose the transmitted channel(s) together with the allocated power(s) without any prior knowledge of the sender strategy. Compared to the single-agent adversary (SAA), the multiple agents in MAAS can achieve significant gains in signal-to-noise ratio under the same power constraints and partial observability, while providing additional stability and a more efficient learning process.

 

September 13, 2021 Meeting

Speaker: Lingjia Liu, Virginia Tech

Title:   Deep Echo State Q-Network (DEQN) for Next Generation Wireless Networks

Abstract: Motivated by the recent success of deep reinforcement learning (DRL), in this talk, we adopt DRL to build an intelligent wireless network. An efficient DRL framework called deep echo state Q-network (DEQN) has been developed by adopting the echo state network (ESN) as the kernel of deep Q-networks. The associated computationally efficient training algorithms have been developed by utilizing the special structure of ESNs to achieve a good policy with limited training data. Convergence analysis of the introduced DEQN approach has been conducted to demonstrate the faster convergence of DEQN compared to that of the deep recurrent Q-network (DRQN), a popular DRL framework widely used for wireless networks. For performance evaluation, we will apply our DEQN framework under the dynamic spectrum access (DSA) and the network resource allocation/user scheduling scenarios to demonstrate the efficiency and effectiveness of our scheme compared to the state of the art. We believe that the DEQN framework sheds light on the adoption of DRL techniques in next generation wireless networks.

 

August 30, 2021 Meeting

Speaker: Lauren Huie, AFRL

Title:  Shaking Out Robustness from Theory to Testbed

Abstract: Bridging the gap between theory and testbed is not trivial. Modeling the complexity of what a network may encounter due to unintentional or intentional interference is key to characterizing performance in over-the-air environments. In the case of intentional interference from an adversary, a few examples are given of modeling the strength of the adversary. Determining over-the-air performance requires a careful look at how we parameterize the physical environment and capture the state of the experimental set-up. An experimental framework is described which bridges the gap between theory and practice, allowing for easy entry from MATLAB to measurements.

 

August 16, 2021 Meeting

Speaker: Dylan Wheeler, AFRL intern

Title:  Asynchronous SCMA Uplink Multiuser Detection with Unknown Channel Delays

Abstract:  In recent years, there has been a surge of research regarding the development of a viable non-orthogonal multiple access (NOMA) scheme, which is seen by many as a potential solution to the problem of increasingly crowded spectral resources. NOMA schemes aim to pack more users into the system than there are orthogonal resource elements, and one scheme that has emerged as a clear frontrunner is termed sparse code multiple access (SCMA). In the uplink, SCMA involves mapping each user’s bits to a sparse codeword unique to that user, which is then spread over the orthogonal resource elements and transmitted. At the receiver, the message passing algorithm (MPA) can then be implemented to jointly detect each user’s bits, assuming synchronized reception. In this talk, we drop the assumption of synchronization, which may not be practical in many systems but is nonetheless held in the vast majority of the literature. We introduce a novel method of performing multiuser detection within an SCMA system for which each user experiences some channel delay that is unknown to the receiver. The proposed algorithm involves a compressed sensing step in addition to MPA, to compensate for the lack of available information. Preliminary simulations over an additive white Gaussian noise channel suggest that a favorable bit error rate can be achieved under certain SNR conditions.

 

August 2, 2021 Meeting

Speaker: Ali Pezeshki, Colorado State University

Title:  A General Framework for Bounding Approximate Dynamic Programming Schemes

Abstract:   For years, there has been interest in approximation methods for solving dynamic programming problems, because of the inherent complexity in computing optimal solutions characterized by Bellman’s principle of optimality. A wide range of approximate dynamic programming (ADP) methods now exists. Examples of ADP methods are myopic schemes, roll-out schemes, and reinforcement learning schemes. It is of great interest to guarantee that the performance of an ADP scheme be at least some known fraction, say β, of optimal. In this talk, we introduce a general approach to bounding the performance of ADP methods, in this sense, in the stochastic setting. The approach is based on new results for bounding greedy solutions in string optimization problems, where one has to choose a string (ordered set) of actions to maximize an objective function. This bounding technique is inspired by submodularity theory, but submodularity is not required for establishing bounds. Instead, the bounding is based on quantifying certain notions of curvature of string functions; the smaller the curvatures the better the bound. The key insight is that any ADP scheme is a greedy scheme for some surrogate string objective function that coincides in its optimal solution and value with those of the original optimal control problem. The ADP scheme then yields to the bounding technique mentioned above, and the curvatures of the surrogate objective determine the value β of the bound. The surrogate objective and its curvatures depend on the specific ADP.
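The flavor of such greedy guarantees can be seen on a toy monotone submodular objective, where greedy selection is provably within (1 - 1/e) of optimal; the curvature-based bounds in the talk generalize this style of guarantee beyond submodularity to general ADP schemes. The sets and budget below are arbitrary toy choices.

```python
from itertools import combinations

# Toy monotone submodular objective: f(S) = size of the union of the chosen sets
sets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}, {1, 4}]
k = 2  # budget: length of the action string

def f(chosen):
    return len(set().union(*chosen)) if chosen else 0

# Greedy string construction: append the action with the largest marginal gain
greedy = []
for _ in range(k):
    best = max((s for s in sets if s not in greedy), key=lambda s: f(greedy + [s]))
    greedy.append(best)

# Brute-force optimum for comparison (feasible only at this toy scale)
opt = max(f(list(c)) for c in combinations(sets, k))
# Here greedy attains the optimum; in general it is guaranteed a known
# fraction of optimal, with the fraction governed by curvature quantities.
```

The talk's key step is showing that any ADP scheme is greedy for some surrogate string objective, so the same curvature machinery bounds its performance.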

 

July 19, 2021 Meeting

Speaker: Elizabeth Bentley, AFRL

Title: A Distributed Deep-Reinforcement Learning Framework for Software-Defined UAV Network Control

Abstract: Control and performance optimization of wireless networks of Unmanned Aerial Vehicles (UAVs) require scalable approaches that go beyond architectures based on centralized network controllers. At the same time, the performance of model-based optimization approaches is often limited by the accuracy of the approximations and relaxations needed to solve UAV network control problems through convex optimization or similar techniques, and by the accuracy of the channel and network models used. To address these challenges, a new architectural framework to control and optimize UAV networks is developed based on Deep Reinforcement Learning (DRL). A virtualized, ‘ready-to-fly’ emulation environment is created to generate the extensive wireless data traces necessary to train DRL algorithms, traces that are notoriously hard to generate and collect on battery-powered UAV networks. The training environment integrates previously developed wireless protocol stacks for UAVs into the CORE/EMANE emulation tool. This ‘ready-to-fly’ virtual environment guarantees scalable collection of high-fidelity wireless traces that can be used to train DRL agents. The proposed DRL architecture enables distributed data-driven optimization, facilitates network reconfiguration, and provides a scalable solution for large UAV networks.

 

June 21, 2021 Meeting

Speaker: Erin Tripp, AFRL

Title:  Application-driven Structure in Nonconvex Optimization

Abstract: Practice has outpaced theory in many modern applications of optimization, which increasingly involve highly nonconvex or non-smooth functions. However, real-world applications often exhibit other useful structure that can be exploited in the development of new theory and algorithms. This talk will detail ongoing research on sparsity-promoting regularization for signal and image processing, as well as the convergence and generalization properties of neural networks.
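
A workhorse example of sparsity-promoting regularization is the l1-regularized least-squares problem solved by proximal gradient descent (ISTA). A minimal sketch, with hypothetical problem sizes:

```python
import numpy as np

# ISTA sketch for minimize 0.5*||A x - b||^2 + lam*||x||_1.
# Soft-thresholding is the proximal operator of the l1 norm; it is what
# promotes exact zeros in the iterates.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)             # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))            # underdetermined system
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]       # 3-sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
```

Despite having only 30 measurements for 60 unknowns, the l1 penalty recovers the support of the sparse signal.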

 

June 7, 2021 Meeting

Speakers:  Steve Russell & Niranjan Suri, ARL

Context: In our CoE, the CORNET testbed at Virginia Tech is one way we are unifying spatially separated academic and AFRL staff to shake out theory in step with experimental benchmarking. ARL is pioneering a testbed that unifies its academic collaborators with its government staff. They are extending this proving ground to lay the foundation for a joint-service (Army, Navy, Air Force) collaborative testbed and have invited us to participate.

Title:  The ARL Distributed Virtual Proving Ground (DVPG)

Abstract: The concept of a Distributed Virtual Proving Ground (DVPG) builds on initial notions from the Army’s Internet of Battlefield Things (IOBT) research. The DVPG is a network of highly distributed testbeds that enables virtualized, multi-site experimentation among DEVCOM ARL and its collaborators. The DVPG targets the Army’s need for advanced distributed modeling and simulation, its need to accelerate converged experimental innovation, and its need for complex datasets and an environment where basic research can be explored and evaluated. It is intended to be a fully instrumented capability for executing collective sensing experimentation and evaluation, with mobility and spectrum effects, over broad geographically distributed range and stand-off. The planned talk will introduce the DVPG concept and provide details on specific capabilities that are available to partner organizations.

 

April 5, 2021 Meeting

Speaker: Jeff Reed, Virginia Tech

Title: 5G Standardization and Satellites

Abstract: The 3GPP organization which standardizes cellular systems such as 3G, 4G, and 5G is currently looking at extending 5G’s reach to Non-Terrestrial Communications (NTC). While this work is proceeding with study groups, there are many challenges in extending the 5G waveform and the overall network architecture. This presentation will discuss the technical issues faced by 3GPP to standardize NTC, the timetable for standardization, and the anticipated interoperability issues and network architectures.

 

March 22, 2021 Meeting

Slides: 6G Wireless – Illuminating New Directions in Waveform Design (PDF)

Speaker: Robert Calderbank, Duke University

Title: 6G Wireless – Illuminating New Directions in Waveform Design

Abstract: The world of wireless communications is changing rapidly, and I will look back at GSM, CDMA, and OFDM, and describe how these technologies were developed in response to demanding use cases. I will then look forward at use cases motivating 6G wireless, such as drones, explore what might be possible with OFDM, and what might be more difficult. This will motivate a discussion of OTFS (Orthogonal Time Frequency Space), a physical layer technology that can be architected as an OFDM overlay.
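
The OFDM-overlay view can be made concrete with the transform that connects the two domains. A minimal sketch, assuming an M x N delay-Doppler grid and the common FFT-based form of the (inverse) symplectic finite Fourier transform; pulse shaping and the full modulator are omitted.

```python
import numpy as np

# OTFS as an OFDM overlay: information symbols live on an M x N
# delay-Doppler grid and are moved to the time-frequency grid by the
# inverse symplectic finite Fourier transform (ISFFT), after which a
# standard OFDM modulator can be applied.
def isfft(x_dd):
    """Delay-Doppler -> time-frequency: FFT along delay (axis 0),
    inverse FFT along Doppler (axis 1)."""
    return np.fft.ifft(np.fft.fft(x_dd, axis=0), axis=1)

def sfft(x_tf):
    """Time-frequency -> delay-Doppler: the exact inverse of isfft."""
    return np.fft.ifft(np.fft.fft(x_tf, axis=1), axis=0)

M, N = 8, 4                                   # delay bins x Doppler bins
rng = np.random.default_rng(1)
qpsk = (rng.choice([-1, 1], (M, N)) +
        1j * rng.choice([-1, 1], (M, N))) / np.sqrt(2)
x_tf = isfft(qpsk)                            # feed this to an OFDM modulator
recovered = sfft(x_tf)
```

Because the two transforms are exact inverses, the delay-Doppler symbols are recovered perfectly over an ideal channel; the interest of OTFS lies in how real delay-Doppler channels act on this grid.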

 

March 8, 2021 Meeting

Slides: Wireless System Design Using Optimization and Machine Learning (PDF)

Speaker: Andrea Goldsmith, Princeton University

Title:  Wireless System Design using Optimization and Machine Learning

Abstract: Design and analysis of communication systems have traditionally relied on mathematical and statistical channel models that describe how a signal is corrupted during transmission. In particular, communication techniques such as modulation, coding, and detection that mitigate performance degradation due to channel impairments are based on such channel models and, in some cases, on instantaneous channel state information about the model. However, there are propagation environments where this approach does not work well because the underlying physical channel is too complicated, poorly understood, or rapidly time-varying. In these scenarios we propose completely new approaches to detection in the communication receiver, in which the detection algorithm utilizes tools from optimization and machine learning (ML). We present results for three communication design problems where the optimization and ML approaches result in better performance than current state-of-the-art techniques: blind massive MIMO detection, signal detection without accurate channel state information, and signal detection without a mathematical channel model. Broader application of optimization and ML to communication system design in general, and to millimeter wave communication systems in particular, is also discussed.

 

February 22, 2021 Meeting

Speaker: Lingjia Liu, Virginia Tech

Title: Learning with Knowledge of Structure: A Neural Network-Based Approach for MIMO-OFDM Detection

Abstract: We explore neural network-based strategies for performing symbol detection in a MIMO-OFDM system. Building on a reservoir computing (RC)-based approach towards symbol detection, we introduce a symmetric and decomposed binary decision neural network to take advantage of the structure knowledge inherent in the MIMO-OFDM system. To be specific, the binary decision neural network is added in the frequency domain utilizing the knowledge of the constellation. We show that the introduced symmetric neural network can decompose the original M-ary detection problem into a series of binary classification tasks, thus significantly reducing the neural network detector complexity while offering good generalization performance with limited training overhead. Numerical evaluations demonstrate that the introduced hybrid RC-binary decision detection framework performs close to maximum likelihood model-based symbol detection methods in terms of symbol error rate in the low SNR regime with imperfect channel state information (CSI).
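
The decomposition step can be illustrated without the reservoir computing front end. For Gray-mapped 16-QAM in AWGN, four binary threshold decisions per symbol reproduce the 16-way nearest-neighbor (maximum likelihood) decision; this is a toy stand-in for the talk's binary decision network, exploiting the same constellation structure.

```python
import numpy as np

# Gray-mapped 16-QAM: I and Q each take levels in {-3, -1, 1, 3}.
LEVELS = np.array([-3.0, -1.0, 1.0, 3.0])
CONSTELLATION = np.array([a + 1j * b for a in LEVELS for b in LEVELS])

def detect_binary(y):
    """Four binary decisions: per axis, a sign test and an
    inner/outer test, instead of one 16-ary decision."""
    bits = []
    for v in (y.real, y.imag):
        bits += [int(v < 0), int(abs(v) < 2)]
    return tuple(bits)

def detect_ml(y):
    """Exhaustive nearest-neighbor over all 16 points, mapped to the
    same bit labels for comparison."""
    s = CONSTELLATION[np.argmin(np.abs(CONSTELLATION - y))]
    return tuple(int(b) for b in
                 [s.real < 0, abs(s.real) < 2, s.imag < 0, abs(s.imag) < 2])

rng = np.random.default_rng(2)
sent = rng.choice(CONSTELLATION, 200)
received = sent + 0.3 * (rng.standard_normal(200) +
                         1j * rng.standard_normal(200))
```

Because the constellation is a product of two 4-PAM axes, the binary decisions agree with maximum likelihood while replacing one M-ary classifier with log2(M) binary ones, which is the complexity reduction the talk leverages.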

 

February 8, 2021 Meeting

Speaker: Hoda Bidkhori, University of Pittsburgh

Title:   Robust Multi-Agent AI for Contested Environments

Abstract: Many Air Force problems, such as protecting communication networks against adversaries in contested environments, can be formulated as games between adversaries and defenders. The challenge is that these games, their states, and their actions are not fully known or observable in practice and must be learned in real time from online observations.

In this talk, we propose several frameworks to address these challenging problems. We propose robust learning and optimization frameworks to solve decision-making problems confronted with real-time, non-stationary, and incomplete data (potentially missing data). Furthermore, we employ neural networks to address more complex settings and propose deep reinforcement learning to learn the system’s variables, states, and objectives and to produce practical solutions.

This is joint work with Vahid Tarokh.

 

January 25, 2021 Meeting

Speaker: Ali Pezeshki, Colorado State University

Title:  A Sense-Learn-Adapt Framework for Communication in Contested Environments

Abstract: We discuss an approach to developing a sense-learn-adapt framework for communication in contested environments, characterized by adversarial interference. This framework involves three interrelated problems. (1) Sensing the adversary: At any given time, the adversary has an estimate of the state of the friendly communication assets. This state might be the subspace or collection of subspaces (in space-wavenumber-frequency) that the friendly assets communicate over. Given this estimate, the adversary generates interference (e.g., by pouring power into a specific subspace) to impede the communication of friendly assets. Sensing the adversary involves estimating the estimate that the adversary has of the state of the friendly assets, given observations of the actions (generated interference) taken by the adversary. (2) Learning the adversary: This amounts to determining whether the adversary is cognitive; that is, whether or not it chooses its actions (e.g., the interference subspace) by solving a constrained optimization problem, and if so, what the corresponding utility function is. One enabling tool here might be the theory of revealed preferences from microeconomics. (3) Adapting the friendly assets: Given the utility function of the adversary and its sequence of actions, the problem is to adapt the communication subspaces of friendly assets to confuse the adversary while achieving a desired rate. Our actions here, in the abstract, might take the form of selecting subspaces that are parameterized by waveforms, beam and/or frequency allocations, and/or the geometry of communication assets. We discuss one example formulation of these three steps in a simple setting. But the main aim of this talk is to discuss a principled approach and seek collaborators for various extensions of these ideas, rather than to present results for specific scenarios. The framework discussed here is inspired by and builds on recent work of Krishnamurthy et al. on adversarial cognitive radar.

 

January 11, 2021 Meeting

Slides: Bounds on Bearing, Symbol, and Channel Estimation under Model Misspecification

Speakers: Akshay S. Bondre, Touseef Ali, and Christ D. Richmond, Arizona State University

Title:  Bounds on Bearing, Symbol, and Channel Estimation Under Model Misspecification

Abstract: The constrained Cramér-Rao bound (CRB) has been used successfully to study parameter estimation in flat fading scenarios, and to establish the value of side information such as known waveform properties (e.g., constant modulus) and known training symbols. There are classes of communication links, however, that may be subject to highly dynamic changes, which could make the assumed data model an inaccurate model of the channel. Therefore, the constrained misspecified CRB (MCRB) is considered to explore the impact of model mismatch for such communication links. Specifically, of interest is quantifying the loss in estimation performance when one assumes the channel is stationary when it is not. As we explore the application of machine/deep learning to dynamic channels, measures such as the constrained MCRB may help provide insight into convergence rates, the benefits of transfer learning, and the level of fidelity/complexity required to achieve desired performance.

 

November 30, 2020 Meeting

Slides: How to Stop Worrying about Ill-Conditioning in Low-Rank Matrix Estimation (PDF)

Speaker: Yuejie Chi, CMU

Title: Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent

Abstract: Low-rank matrix estimation is a canonical problem that finds numerous applications in signal processing, machine learning, and imaging science. A popular approach in practice is to factorize the matrix into two compact low-rank factors and then optimize these factors directly via simple iterative methods such as gradient descent and alternating minimization. Despite non-convexity, recent literature has shown that these simple heuristics in fact achieve linear convergence when initialized properly for a growing number of problems of interest. However, upon closer examination, existing approaches can still be computationally expensive, especially for ill-conditioned matrices: the convergence rate of gradient descent depends linearly on the condition number of the low-rank matrix, while the per-iteration cost of alternating minimization is often prohibitive for large matrices.

The goal of this work is to set forth a competitive algorithmic approach dubbed Scaled Gradient Descent (ScaledGD), which can be viewed as pre-conditioned or diagonally-scaled gradient descent, where the pre-conditioners are adaptive and iteration-varying with minimal computational overhead. With tailored variants for low-rank matrix sensing, robust principal component analysis, and matrix completion, we theoretically show that ScaledGD achieves the best of both worlds: it converges linearly at a rate independent of the condition number of the low-rank matrix, similar to alternating minimization, while maintaining the low per-iteration cost of gradient descent. To the best of our knowledge, ScaledGD is the first algorithm that provably has such properties over a wide range of low-rank matrix estimation tasks.

November 16, 2020 Meeting

Slides: DNN-Based Power Amplifier Pre-Distortion for Communications in Contested Environments (PDF)

Papers

Contact: Yi Feng, Ph.D.

Title: Power Amplifier Predistortion via Reversible Deep Neural Networks

Abstract: Hardware limitations can be key issues for efficient communications in contested environments. In particular, the power amplifier (PA) is a key element that must be considered. In practice, there may always exist inherent non-linearities in power amplifiers, causing signal constellation compression and bandwidth growth. In this work, we design a digital pre-distorter to compensate for these non-linearities. Inspired by the idea of Normalizing Flows, we propose a reversible Deep Neural Network (DNN) based architecture and construct digital pre-distorters to mitigate the non-linearities. Our approach gives significant linearization improvements over the state of the art. Simulations demonstrating these improvements are presented.
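
A toy version of the pre-distortion idea, using the standard indirect-learning approach with a polynomial in place of the reversible DNN proposed in the talk: fit a post-inverse of a hypothetical memoryless PA model by least squares, then apply it before the PA. The PA model and polynomial orders are illustrative only.

```python
import numpy as np

# Indirect learning sketch: learn q such that q(pa(u)) ~ u from
# input/output pairs, then use q as the pre-distorter so that
# pa(q(x)) ~ x (for a memoryless PA, post- and pre-inverse coincide).
def pa(u):
    """Hypothetical memoryless PA: unit gain with cubic compression."""
    return u - 0.05 * u ** 3

x_train = np.linspace(-1.1, 1.1, 441)          # drive covering the range
y_train = pa(x_train)                           # observed PA output
F = np.column_stack([y_train, y_train ** 3, y_train ** 5])
coef, *_ = np.linalg.lstsq(F, x_train, rcond=None)  # odd-order post-inverse

def predistort(u):
    """Polynomial pre-distorter with the learned coefficients."""
    return coef[0] * u + coef[1] * u ** 3 + coef[2] * u ** 5

x = np.linspace(-1.0, 1.0, 201)
linearized = pa(predistort(x))                  # end-to-end response
```

The composed response is far closer to the identity than the raw PA, which is the constellation-decompression effect the talk's reversible DNN achieves for much richer PA models.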