Reasoning Agents: Evolve Past Signal-Based Thinking
We’ve got a bone to pick with signal-based prospecting. In theory, it’s a great practice for getting teams away from templatized messaging: pick signals, build emails that align to combinations of those signals, and send more contextually relevant messages to the buyer.
In practice, it leaves a lot to be desired for the person receiving the emails.
We covered some of the problems in a previous blog. But let’s combine that thinking with an emerging problem: having too many signals.

First, let’s define signal-based outbound more clearly.
A signal can be a number of things:
- A custom sourced datapoint (ex. a GPT analysis of a website scrape that tells you if they have a self-serve motion, the types of customers they serve via a logo analysis, if they list SOC2 compliance on the website, etc.)
- A purchased trigger event (ex. hiring, news articles, an exec leadership change, M&A, etc.)
- An intent signal (ex. someone at the company is on the pricing page, looking at competitors on G2, someone researching a relevant topic, etc.)
- A first party data point (ex. an asset download, product usage, contract renewal due date, request for demo, etc)
The common thinking is to merge these into templates or AI-powered content blocks that adjust the messaging, with the signal becoming the reason for reaching out.
More advanced teams using signals have built out more intricate workflows that combine these signals with richer segmentation (ex. role, company size, or industry) and even additional context (ex. call notes, additional custom sourced data points like team size) to create more nuanced messaging.
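As a minimal sketch of the workflow described above, here is how a signal plus segmentation data might be routed to a message angle. All field names and routing rules here are hypothetical, purely for illustration — real setups would live in a sequencer or enrichment tool, not a hand-rolled function.

```python
def pick_content_block(signal: dict, segment: dict) -> str:
    """Route a signal + segment combination to a hypothetical message angle."""
    # Trigger events tend to be time-sensitive, so they take priority here.
    if signal["type"] == "trigger" and signal["name"] == "hiring":
        if segment["role"] == "operations":
            return "scaling-team-workflows"
        return "growth-momentum"
    if signal["type"] == "intent" and signal["name"] == "pricing_page":
        return "pricing-and-roi"
    # Fallback when no specific signal/segment pairing exists.
    return "generic-value-prop"

signal = {"type": "trigger", "name": "hiring"}
segment = {"role": "operations", "company_size": "51-200"}
print(pick_content_block(signal, segment))  # -> scaling-team-workflows
```

Even this toy version shows the core problem: every new signal multiplies the number of branches someone has to define and maintain.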
Problem 1: Limited Context
The first problem, which we covered in our first blog on the topic, is limited context. Given all the time in the world to write someone an email, a great seller wouldn’t search for a single signal and then start writing.
They’d contextualize this along with other research.
Who is receiving the email? What’s their background? What do you think their personality is? What posts are they engaging with on social media? What about folks on their team? Have we engaged them before?
What is the company? What market do they serve? What is going on in their market? What’s going on at the company?
All of this would be thought through and pattern-matched against past conversations with similar-“feeling” prospects.
The signal might be the thing the outreach hinges on… it might not be. The risk in using a limited scope of signals is that you likely run into buyers where the context is insufficient.
Focusing on a hiring signal... while they are still trying to wrap their heads around a recent acquisition.
Assuming a pain point is important... while their new boss just joined from a company with a historically very different approach.
Approaching them like a novice on a topic (say, the signal was that they have a PLG motion) when they actually have decades of experience with it, because you missed their background.
Missing past conversations with the company, key initiatives mentioned, reasons they’ve churned before, and so on. These are all contextual clues that get overlooked if you design a campaign around a few triggers without thoughtfully integrating the full context of the prospect.
Problem 2: Signal Sprawl
So, let’s say we solve the limited context problem. We pull all of the signals together. We make everything available to sellers, go to market engineers (GTMEs), marketers, and operations.
It’s a tad overwhelming!
Sellers will be drowning in decision fatigue. Overwhelmed by options, human bias will start to show, and they’ll take mental shortcuts. Instead of sitting down and thinking through the information, they’ll have conversations in their own heads: “I know how to connect these dots to an email,” “I’ve seen this work before,” and so on.
We love sellers... but this is rarely objectively tied to what is best suited to work.
For RevOps and GTMEs, this creates a challenge when paired with most tools in the market. Most tools are built to aggregate and create segmented workflows that require heavy user prompting.
The infinite web of possible connections might feel productive, but there’s a simpler way to approach this problem: agentic reasoning.
Teams can absolutely prompt this themselves. Even though it’s a core feature within Ora, we encourage it: if there’s a way you’d want to see the aggregated research synthesized, this gives you scalable control over how understanding is created across the signals.
When prompting this, remember to give the LLM rules for how to prioritize the information and rules for what the expected output should look like. This creates a level of standardization across the outputs.
This is a great way to keep sellers from being overwhelmed, and to simplify the workflow sprawl that otherwise becomes a never-ending hydra for operations and GTMEs to tackle.
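As a hedged sketch of the prompting approach above, here is one way to assemble a synthesis prompt with explicit prioritization rules and a fixed output shape. The rule ordering and field names are illustrative assumptions, not a canonical setup.

```python
# Hypothetical prioritization rules; tune these to your own motion.
PRIORITY_RULES = """\
1. First-party data (past conversations, product usage) outranks everything.
2. Trigger events (hiring, M&A, exec changes) outrank intent signals.
3. Ignore any signal older than 90 days unless nothing fresher exists.
"""

# A fixed output shape standardizes results across reps and accounts.
OUTPUT_FORMAT = """\
Return exactly three fields:
- primary_angle: the single signal the outreach should hinge on
- supporting_context: 1-2 sentences tying other signals to it
- risks: anything in the research that could make this angle land badly
"""

def build_synthesis_prompt(signals: list[str]) -> str:
    """Assemble a reasoning prompt that standardizes the LLM's output."""
    joined = "\n".join(f"- {s}" for s in signals)
    return (
        "You are synthesizing account research for a seller.\n\n"
        f"Signals:\n{joined}\n\n"
        f"Prioritization rules:\n{PRIORITY_RULES}\n"
        f"Expected output:\n{OUTPUT_FORMAT}"
    )

print(build_synthesis_prompt([
    "Hiring 3 RevOps roles (posted 12 days ago)",
    "Champion from 2023 deal just rejoined as VP Ops",
]))
```

The point is less the exact wording and more the structure: rules for weighing signals against each other, plus a schema the output must follow every time.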
Problem 3: Guesswork
Even if you’re teaching an LLM how to reason, that reasoning is going to be based on a subjective “mega-prompt” for how you think the output should look. It’s not based on an understanding of what works with each persona in the nuanced situations surfaced.
Play building has the same problem. While it can be a great way to test new motions, the reality is you’re likely just guessing what might work. If you’re not building thoughtful experiments within your plays, you could be left with more questions than answers when a play works… or doesn’t.
To effectively reason through all of the signals, variables, and so on, you need an objective understanding of what makes content work that goes deeper than reply rates, opens, and clicks. You need to understand the nuance of the content itself, compared against the results and their accompanying signals and variables.
How you position a value prop about speeding up workflows to an operations persona should be dependent on all the available context and its ultimate results.
ACI: The Solution Behind Ora
This is what lies at the heart of Lavender’s Augmented Communication Intelligence (ACI). When you launch an Ora agent, you can feed it whatever custom first- or third-party data you like; we’ll pair it with Ora’s own research and reason through it based on our understanding of the campaign *and* our understanding of the content and its relative performance.
This is the AI we were promised when we saw a world of autonomous agents helping sellers do more with less. While workflow spaghetti was a great AI 1.0, the future requires a richer synthesis of all of the available context - including what works in the content itself.
If you want to try this for yourself, build agents for free in Ora and see what’s possible (prompt engineering expertise not required).