The CoVar Zeitgeist: February 2024¶
A curated list of the latest research in AI/ML.
Featured¶
- Large Language Models Relearn Removed Concepts
Investigates how LLMs manage to relearn concepts after the neurons associated with those concepts are deleted. Finds that LLMs relocate advanced concepts to earlier layers and reassign relearned concepts to semantically similar neurons.
- Escalation Risks from Language Models in Military and Diplomatic Decision-Making
If LLMs are granted decision-making authority, how dangerous would they be? This paper designs a wargame and lets LLMs play it to test whether they escalate. They do. A lot. Nuclear, even. Not surprising, but it highlights the risks.
- Discovering group dynamics in synchronous time series via hierarchical recurrent switching-state models
Time-series paper with a co-author from the US Army CCDC Soldier Center. Learns the behavior of individual actors who are coordinated by some latent group process, e.g. a squad of soldiers in a training exercise. Uses explainable Bayesian parametric methods rather than difficult-to-explain neural methods. In their case study, the model learns that one particular soldier was assigned to keep watch so the squad would not be approached unnoticed.
- High-dimensional analysis of double descent for linear regression with random projections
Demonstrates that double descent occurs in linear regression settings as well as in deep learning.
- Reinforcement Learning for SAR View Angle Inversion with Differentiable SAR Renderer
Uses a differentiable SAR renderer inside a deep reinforcement learning algorithm to solve an inverse problem in SAR imagery: predicting incidence and azimuth angles given an observation. Assumes the target type is known.
- Reliability Analysis of Complex Systems using Subset Simulations with Hamiltonian Neural Networks
This paper estimates the probability that large, complex systems fail. In particular, it combines Monte Carlo subset simulation with Hamiltonian neural network-driven sampling to estimate the chance of different parts of the system failing.
LLMs¶
- Large Language Models Relearn Removed Concepts
Investigates how LLMs manage to relearn concepts after the neurons associated with those concepts are deleted. Finds that LLMs relocate advanced concepts to earlier layers and reassign relearned concepts to semantically similar neurons.
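As a rough illustration of the experimental setup, here is a minimal PyTorch sketch of an ablate-then-probe loop: zero out a concept's neurons, fine-tune, then fit a linear probe at each layer to see where the concept reappears. The helper names and probe design are ours, not the authors'.

```python
import torch

def ablate(layer: torch.nn.Linear, neuron_ids: list[int]) -> None:
    """Zero the weights/bias of selected neurons, 'deleting' the concept."""
    with torch.no_grad():
        layer.weight[neuron_ids, :] = 0.0
        if layer.bias is not None:
            layer.bias[neuron_ids] = 0.0

def probe_accuracy(hidden: torch.Tensor, labels: torch.Tensor) -> float:
    """Fit a linear probe on one layer's activations; high accuracy
    suggests the concept is represented at that layer."""
    probe = torch.nn.Linear(hidden.shape[1], 2)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        torch.nn.functional.cross_entropy(probe(hidden), labels).backward()
        opt.step()
    return (probe(hidden).argmax(1) == labels).float().mean().item()

# After ablating and fine-tuning, run probe_accuracy on each layer's
# activations to map where the relearned concept now lives.
```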
- Escalation Risks from Language Models in Military and Diplomatic Decision-Making
If LLMs are granted decision-making authority, how dangerous would they be? This paper designs a wargame and lets LLMs play it to test whether they escalate. They do. A lot. Nuclear, even. Not surprising, but it highlights the risks.
- Skill-Mix: A Flexible and Expandable Family of Evaluations for AI Models
How should LLMs be evaluated? This paper proposes maintaining a list of skills an LLM can exhibit, randomly sampling a subset of them, and asking the LLM to perform a task combining them all. The intuition is that the combined task is unlikely to appear in the LLM's training set.
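A minimal sketch of the idea, with an illustrative skill and topic list of our own (the paper uses a curated list and LLM-based grading):

```python
import random

SKILLS = ["metaphor", "modus ponens", "red herring", "self-serving bias"]
TOPICS = ["gardening", "naval history", "sourdough baking"]

def skill_mix_prompt(k: int, rng: random.Random) -> str:
    """Sample k skills and a topic; the combined task is unlikely to be memorized."""
    skills = rng.sample(SKILLS, k)
    topic = rng.choice(TOPICS)
    return (
        f"Write a short paragraph about {topic} that naturally exhibits "
        f"all of the following skills: {', '.join(skills)}."
    )

print(skill_mix_prompt(2, random.Random(0)))
```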
- In-Context Learning for Extreme Multi-Label Classification
Can LLMs handle classification tasks with lots (10,000+) of classes? This paper says they can, even in a zero-shot context.
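One common recipe for making that many classes tractable is retrieve-then-rank: embed all label names once, retrieve a shortlist for each input, and let the LLM choose among the shortlist. Below is a generic sketch of that pattern, with `embed` and `llm` as hypothetical stand-in callables; the paper's actual pipeline differs in its details.

```python
import numpy as np

def classify(text: str, labels: list[str], embed, llm, shortlist: int = 20) -> str:
    """Zero-shot: retrieve a label shortlist by embedding similarity,
    then let the LLM pick among the shortlist."""
    label_vecs = np.stack([embed(l) for l in labels])   # cache this in practice
    scores = label_vecs @ embed(text)
    top = [labels[i] for i in np.argsort(-scores)[:shortlist]]
    prompt = f"Text: {text}\nAnswer with the best label from: {', '.join(top)}"
    return llm(prompt)
```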
- Blending Is All You Need: Cheaper, Better Alternative to Trillion-Parameters LLM
Finds that mixtures of small LLMs are competitive with single trillion-parameter LLMs.
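The mechanism is simple enough to sketch: for each reply, pick one of the small models at random, with every model conditioning on the shared conversation history. A toy version, where `models` holds hypothetical generate-from-history callables:

```python
import random

def blended_chat(models: list, user_turns: list[str], seed: int = 0) -> list[str]:
    """Each reply comes from a randomly chosen small model; all models
    see the same shared conversation history."""
    rng = random.Random(seed)
    history, replies = [], []
    for turn in user_turns:
        history.append(("user", turn))
        model = rng.choice(models)       # uniform choice per turn
        reply = model(history)           # hypothetical generate() callable
        history.append(("assistant", reply))
        replies.append(reply)
    return replies
```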
- Teaching Algorithmic Reasoning via In-context Learning
Finds that proper use of in-context learning can greatly improve an LLM's ability to reason.
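One concrete instance of the technique is algorithmic prompting: in-context exemplars that spell out every intermediate step rather than just question/answer pairs. A sketch for multi-digit addition (the exact exemplar format in the paper differs):

```python
def addition_exemplar(a: int, b: int) -> str:
    """Build an exemplar that shows digit-by-digit addition with carries."""
    steps, carry = [], 0
    da, db = str(a)[::-1], str(b)[::-1]          # least-significant digit first
    for i in range(max(len(da), len(db))):
        x = int(da[i]) if i < len(da) else 0
        y = int(db[i]) if i < len(db) else 0
        s = x + y + carry
        steps.append(f"digit {i}: {x}+{y}+carry {carry} = {s}, write {s % 10}")
        carry = s // 10
    if carry:
        steps.append(f"final carry: write {carry}")
    return f"Q: {a}+{b}\n" + "\n".join(steps) + f"\nA: {a + b}"

print(addition_exemplar(57, 86))
```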
- Gemini in Reasoning: Unveiling Commonsense in Multimodal Large Language Models
A broad comparison of many LLMs attempting reasoning tasks. Gemini is about as good as GPT-3.5; GPT-4 is still on top.
Object Detection¶
- Reinforcement Learning for SAR View Angle Inversion with Differentiable SAR Renderer
Uses a differentiable SAR renderer inside a deep reinforcement learning algorithm to solve an inverse problem in SAR imagery: predicting incidence and azimuth angles given an observation. Assumes the target type is known.
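Conceptually, the loop looks something like the sketch below: the agent nudges its angle estimates and is rewarded when the renderer's output matches the observed image more closely. Here `render` and `policy` are hypothetical stand-ins, and the paper's agent, reward, and renderer details differ.

```python
import numpy as np

def run_episode(observed: np.ndarray, render, policy, steps: int = 50):
    """Iteratively adjust angle estimates to match the observed SAR image."""
    angles = np.array([45.0, 0.0])           # [incidence, azimuth] estimate
    for _ in range(steps):
        delta = policy(observed, angles)     # agent proposes angle adjustments
        angles = angles + delta
        error = np.mean((render(*angles) - observed) ** 2)
        reward = -error                      # feeds the RL update (omitted here)
    return angles
```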
- Simulation Based Bayesian Optimization
Introduces a Bayesian optimization method whose acquisition function requires only sampling access to the posterior.
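For intuition, here is the simplest acquisition rule with that property, Thompson sampling, which needs nothing more than a posterior draw over candidate points. This is a generic illustration, not the paper's algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def thompson_step(X: np.ndarray, y: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Draw one posterior sample over the candidates; pick its minimizer."""
    gp = GaussianProcessRegressor().fit(X, y)
    sample = gp.sample_y(candidates, n_samples=1, random_state=0).ravel()
    return candidates[np.argmin(sample)]     # next point to evaluate

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (5, 1))
y = (X ** 2).ravel() + 0.1 * rng.standard_normal(5)
cands = np.linspace(-2, 2, 200).reshape(-1, 1)
print(thompson_step(X, y, cands))
```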
Theory¶
- Reliability Analysis of Complex Systems using Subset Simulations with Hamiltonian Neural Networks
This paper estimates the probability that large, complex systems fail. In particular, it combines Monte Carlo subset simulation with Hamiltonian neural network-driven sampling to estimate the chance of different parts of the system failing.
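A toy sketch of plain subset simulation, which the paper accelerates with Hamiltonian neural networks (not shown here): express the rare failure event as a chain of nested, likelier events and multiply the conditional probability estimates. The crude Metropolis refresh below is our simplification.

```python
import numpy as np

def subset_simulation(g, dim, threshold, n=1000, p0=0.1, seed=0):
    """Estimate P(g(X) >= threshold) for X ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    prob = 1.0
    while True:
        vals = g(x)
        level = np.quantile(vals, 1 - p0)        # next intermediate level
        if level >= threshold:                   # final level reached
            return prob * np.mean(vals >= threshold)
        prob *= p0                               # P(next level | current) ~ p0
        seeds = x[vals >= level]
        x = seeds[rng.integers(len(seeds), size=n)]
        # Crude Metropolis refresh, constrained above the current level
        prop = x + 0.5 * rng.standard_normal(x.shape)
        ratio = np.exp(0.5 * ((x ** 2).sum(1) - (prop ** 2).sum(1)))
        accept = (g(prop) >= level) & (rng.random(n) < ratio)
        x[accept] = prop[accept]

# Example: P(sum of 10 standard normals >= 9), a fairly rare event
print(subset_simulation(lambda x: x.sum(axis=1), dim=10, threshold=9.0))
```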
- A Survey on Statistical Theory of Deep Learning: Approximation, Training Dynamics, and Generative Models
Review paper on the statistical theory of deep learning. Three sections: approximation and risk, training dynamics, and generative models. Worth reading.
- Decision Transformer: Reinforcement Learning via Sequence Modeling
Reinforcement learning is useful, but finicky and difficult to implement. Investigates whether reinforcement learning can be done with transformers instead. Models the sequence of past states, actions, and rewards as an autoregressive trajectory and feeds it into a transformer. Seems to give decent results.
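The core data transformation is easy to sketch: compute the return-to-go at each timestep and interleave (return-to-go, state, action) triples into one sequence for a causal transformer. Tokenization and the model itself are omitted here.

```python
import numpy as np

def to_trajectory(states, actions, rewards):
    """Interleave (return-to-go, state, action) for autoregressive modeling."""
    rtg = np.cumsum(rewards[::-1])[::-1]    # reward still to come at each step
    seq = []
    for g, s, a in zip(rtg, states, actions):
        seq += [("rtg", float(g)), ("state", s), ("action", a)]
    return seq

# At test time, condition on a high target return and decode actions.
print(to_trajectory(["s0", "s1", "s2"], [0, 1, 0], [1.0, 0.0, 2.0])[:3])
```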
- High-dimensional analysis of double descent for linear regression with random projections
Demonstrates that double descent occurs in linear regression settings as well as in deep learning.
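A toy version of the experiment, assuming the usual min-norm least-squares setup: sweep the number of random projection features p through the interpolation threshold (p equal to the number of training samples) and test error typically spikes near the threshold and falls again beyond it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 400
X = rng.standard_normal((n, d))
w = rng.standard_normal(d) / np.sqrt(d)
y = X @ w + 0.1 * rng.standard_normal(n)
Xte = rng.standard_normal((1000, d))
yte = Xte @ w

for p in [20, 80, 100, 120, 400]:
    S = rng.standard_normal((d, p)) / np.sqrt(d)   # random projection
    beta = np.linalg.pinv(X @ S) @ y               # min-norm least squares
    err = np.mean((Xte @ S @ beta - yte) ** 2)
    print(f"p={p:4d}  test MSE={err:.3f}")         # expect a peak near p=n
```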
- Movement of insurgent gangs: A Bayesian kernel density model for incomplete temporal data
Uses Bayesian kernel density models to predict the movement of insurgent gangs; the authors worked with Indian police. Incorporates “expert priors” into a sequentially updating model.
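To make the flavor concrete, here is a hypothetical time-decayed kernel density sketch: past incidents contribute Gaussian kernels that fade with age, mixed with an expert prior density. The weighting scheme and bandwidth are our illustrative choices, not the paper's model.

```python
import numpy as np

def intensity(grid, incidents, times, now,
              bandwidth=1.0, decay=0.1, prior=None, prior_weight=0.2):
    """Predicted density over `grid` points given past incident locations."""
    w = np.exp(-decay * (now - times))               # older incidents fade
    d2 = ((grid[:, None, :] - incidents[None, :, :]) ** 2).sum(-1)
    kde = (w * np.exp(-d2 / (2 * bandwidth ** 2))).sum(axis=1)
    kde = kde / kde.sum()                            # normalize to a density
    if prior is not None:                            # mix in the expert prior
        kde = (1 - prior_weight) * kde + prior_weight * prior
    return kde

rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(np.linspace(0, 10, 20),
                            np.linspace(0, 10, 20)), -1).reshape(-1, 2)
incidents = rng.uniform(0, 10, (30, 2))
times = rng.uniform(0, 100, 30)
print(intensity(grid, incidents, times, now=100.0).max())
```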
Knowledge Graphs¶
- Graph2Tac: Learning Hierarchical Representations of Math Concepts in Theorem Proving
Develops tooling to assist mathematicians with constructing proofs in a proof assistant. Fuses together a k-NN and a graph neural network to do so.
- Learning Big Logical Rules by Joining Small Rules
Notes that existing reasoners struggle with large rules, and proposes a novel reasoner that learns large rules by joining multiple small rules together, handling as many as 100 small rules at once.
- Capturing Knowledge Graphs and Rules with Octagon Embeddings
Uses octagon embeddings (in N^2 space, where N is the dimension of the knowledge graph embedding) to improve inference in knowledge graphs, reporting improved performance.
- Knowledge Graphs Evolution and Preservation
Dealing with time in knowledge graphs is difficult. This is an overview of approaches.
- AI Thought
Claims that the differential computer is the only known way to leap forward to superintelligence! Or at least that it is one way a network can use long- and short-term working memory. See also A Cognitive Architecture for Machine Consciousness and Artificial Superintelligence: Thought Is Structured by the Iterative Updating of Working Memory.
Logistics¶
- Predictive Analysis for Optimizing Port Operations
An analysis of how long ships stay in port using classical and off-the-shelf methods. Claims that this is an emerging field.
- Traffic estimation in unobserved network locations using data-driven macroscopic models
Uses network flow theory to learn transportation patterns, especially at unobserved network locations. Macroscopic models make it “completely iterable”, and neural nets are used to learn specific parameters.
Applications¶
- Discovering group dynamics in synchronous time series via hierarchical recurrent switching-state models
Time-series paper with a co-author from the US Army CCDC Soldier Center. Learns the behavior of individual actors who are coordinated by some latent group process, e.g. a squad of soldiers in a training exercise. Uses explainable Bayesian parametric methods rather than difficult-to-explain neural methods. In their case study, the model learns that one particular soldier was assigned to keep watch so the squad would not be approached unnoticed.
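A generative sketch of the hierarchy, under simplifying assumptions of ours (two group regimes, two per-actor states, noisy heading observations); the paper's model is richer.

```python
import numpy as np

rng = np.random.default_rng(0)
T, actors = 50, 4
group_P = np.array([[0.95, 0.05], [0.10, 0.90]])   # group regime transitions
actor_P = [np.array([[0.8, 0.2], [0.2, 0.8]]),     # actor transitions, regime 0
           np.array([[0.5, 0.5], [0.5, 0.5]])]     # actor transitions, regime 1
headings = np.array([0.0, np.pi / 2])              # mean heading per actor state

regime, states = 0, np.zeros(actors, dtype=int)
obs = np.zeros((T, actors))
for t in range(T):
    regime = rng.choice(2, p=group_P[regime])       # shared latent regime
    for i in range(actors):
        states[i] = rng.choice(2, p=actor_P[regime][states[i]])
        obs[t, i] = headings[states[i]] + 0.1 * rng.standard_normal()
```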
- Spatio-temporal data fusion for the analysis of in situ and remote sensing data using the INLA-SPDE approach
Predicts harmful algal blooms by using a hierarchical Bayesian model to align ground-level and satellite data. Postulates a latent spatiotemporal process, modeled as a Gaussian random field, and uses INLA for computational efficiency.
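Schematically, the hierarchical fusion has the structure below, in our generic notation: both data sources observe one latent field, each with its own error. The satellite calibration terms (alpha, beta) are an assumption of this sketch, and the paper's exact specification differs.

```latex
y^{situ}_i = x(s_i, t_i) + \varepsilon^{situ}_i,
    \qquad \varepsilon^{situ}_i \sim N(0, \sigma^2_{situ})
y^{sat}_j  = \alpha + \beta \, x(s_j, t_j) + \varepsilon^{sat}_j,
    \qquad \varepsilon^{sat}_j \sim N(0, \sigma^2_{sat})
x(s, t) \sim \text{Gaussian random field (Mat\'ern SPDE in space, AR(1) in time)}
```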
New Models¶
- DeepSeek LLM: Scaling Open-Source Language Models with Longtermism
New LLM from DeepSeek. Investigates scaling laws for LLMs using a 2 trillion token dataset of English and Chinese text. Finds that scaling behavior depends on many factors, e.g. batch size, learning rate, and dataset composition.
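For reference, the generic shape of the scaling laws such papers fit is Chinchilla-style, shown below; DeepSeek additionally fits power laws for near-optimal batch size and learning rate as functions of the compute budget C. The constants are fit to data; this is the general form, not DeepSeek's reported fit.

```latex
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad B_{opt}(C) \propto C^{a}, \quad \eta_{opt}(C) \propto C^{-b}
```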
- DeepSeek code
Link to the GitHub repository. Open source.