TMSAi Summer School 2025: Large Language Model Recap

Written By: Jeff Price | Jul 24, 2025 2:56:41 PM





The last time I spoke to TMSA at length about AI was about 18 months ago—which is like a decade in AI years. A lot has happened in AI development in 2025.

In this and future posts, I'll be diving into the latest and greatest AI models and apps, in what I'm calling TMSAi Summer School 2025. I don't claim to know everything or to have tested every tool out there, but I'll share from my experience what could potentially be most useful for you working in sales and marketing within the transportation and logistics sector.

IMHO, using AI is no longer optional—it's a must-have skill to increase your productivity, become more effective, and ultimately drive more business and create more value for your organization.

My Bona Fides

I've been using AI in transportation marketing since mid-2022 to assist with various marketing tasks, regularly reducing time to completion by up to 90 percent. At JAXPORT, we've used AI in Trailblazer award-winning campaigns. AI is not just allowing us to save time on what we already do – it’s also enabling us to accomplish things we didn’t have the resources to do previously, such as revealing new leads from our data.

I build AI agent apps in my spare time and have open-sourced a few, including a sales coach, ebook writer, and more. I believe we are living in the best timeline, when you can build anything you can dream of for relatively little cost and at great speed.

LLM Recap

LLMs (large language models) are models that predict tokens (roughly, words) in response to a prompt. On top of them sit apps that let you interact with those models. Some are free, some paid.

Below is a table of the state-of-the-art or frontier AI LLMs, their providers, what I see as their competitive advantage, and a few use cases for sales and marketing, accurate as of July 2025.

| Company | HQ Location | Latest Model Name (July 2025) | App / Platform | Competitive Advantage | Common Use Cases |
|---|---|---|---|---|---|
| OpenAI | USA | GPT-4.1, GPT-4o, o3, o4-mini, o4-mini-high | ChatGPT | State-of-the-art reasoning and knowledge (GPT series); closed-source, proprietary | Content generation, customer support, research assistance |
| Anthropic | USA | Claude 4 (Opus, Sonnet) | Claude (web & API) | Strong safety & alignment focus; fine-tuned for high-quality conversations | Enterprise AI assistants, ethical AI applications |
| Google | USA | Gemini 2.5 Pro and Flash | Gemini (Vertex AI, etc.) | Multimodal capabilities (text, images); tightly integrated with Google's ecosystem | Creative content generation, visual analysis, enhanced search and productivity tools |
| Meta | USA | Llama 4 Scout & Maverick | Various (open-source) | Large open-source LLMs, highly customizable | Academic research, custom app integration, industry-specific fine-tuning |
| xAI (X) | USA | Grok 4 (with "Grok 4 Heavy" variant) | Grok (X/Twitter & app) | Real-time access to X (Twitter) data; unique witty tone and less filtered style | Social media management, real-time engagement, playful/humorous marketing campaigns |
| DeepSeek | China | R1 | DeepSeek API & platforms | Advanced chain-of-thought reasoning; open-source and cost-effective deployment | Complex problem-solving, educational tutoring, research analysis |
| Alibaba | China | Qwen 3 | Alibaba Cloud | "Hybrid" reasoning modes (fast vs. deep thinking); strong multilingual support | Global business applications, multilingual customer service, content localization |
| Moonshot AI | China | Kimi K2 | Kimi | State-of-the-art reasoning and coding via a 1-trillion-parameter Mixture-of-Experts | Advanced research assistance, educational tools, complex reasoning tasks |
| Mistral AI | France | Mistral Large and family of models | Various (open-source) | European open models with permissive use | Cost-effective enterprise solutions, domain-specific customization, EU-compliant deployments |

New models are released every few months, each with the potential to be better than the last. The rapid pace of releases is one reason I don't put much stock in a technical AIO/GEO play, which aims to optimize your content for AI models. More on that in a future post.

Prompting Still Matters in 2025

For non-reasoning models and for comprehensive work, my most effective prompts are usually between 1,000 and 2,000 words long. I don't write most of that myself—the AI does—using a framework I've developed: the instant expert. There are several prompting frameworks out there, but I like and use this one for more serious and consistent work because of its simplicity and its focus on a persona.

**TLDR: Give the model a role and task.**

The framework:

Decide what you want the model to output. Then ask the AI these questions:

  • What makes a great [output]?
  • What are several frameworks used to produce a great [output]?
  • What are the job titles of someone who produces that [output]?
  • What are the characteristics of a world-class [job title]?

Then assemble your prompt:

You are a world-class [job title].

You have these skills: [skill characteristics]

You know that a great [output] has these attributes: [output attributes]

You also know the popular frameworks used to produce [output].

[TASK]

Produce the [output], given the following context: [context]

[/TASK]

 

You'll have to edit the answers from the AI slightly to fit the prompt, but you can assemble one of these prompts in just a few minutes. Getting in the habit of prompting with a role + task will help as you move into instructing AI agents to handle automated workflows. Yep, more on that in a future post.

If you're wondering what "context" means... That's the extra information the AI should know to help it complete the task. If you're asking the AI to write a first draft of a Company Page post on LinkedIn, it might help for it to know the publishing brand's tone of voice, style, no-go words, target audience, and the goal of the post.

Think of a task you might give to your smart intern. You know they're capable, but they might not even know the questions to ask to get the right info to complete the job. You provide them with a healthy amount of instruction so they can be set up for success as they go to work on the assignment. Similarly, you can provide the AI with helpful information so you can get a better output.
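To make that concrete, here is a minimal sketch of an instant expert prompt assembled and sent in Python, assuming the OpenAI Python SDK. The role, skills, attributes, and context are hypothetical placeholders; swap in the answers the AI gave you to the questions above.

```python
# A minimal sketch of an "instant expert" prompt, assuming the OpenAI Python SDK.
# The role, skills, attributes, and context below are hypothetical placeholders;
# replace them with the AI's answers to the four questions above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

prompt = """You are a world-class B2B content marketing manager.

You have these skills: plain-language writing, audience research, persuasive storytelling.

You know that a great LinkedIn Company Page post has these attributes: a strong hook, one clear idea, a call to action, under 150 words.

You also know the popular frameworks used to produce LinkedIn posts.

[TASK]
Produce the LinkedIn post, given the following context:
Brand voice: confident and plainspoken. Audience: shippers evaluating East Coast ports.
Goal: drive registrations for a port tour. No-go words: "synergy," "game-changer."
[/TASK]
"""

response = client.chat.completions.create(
    model="gpt-4.1",  # any capable non-reasoning model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same role + task structure carries over cleanly when you start handing work to AI agents later.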

Prompting for Reasoning Models

With the release of "thinking" or "reasoning" models like o3 or Gemini 2.5 Pro—which are gen AI models that reason through a problem or issue before delivering what you asked for—prompting has evolved.

For these prompts, I tend to go light on the persona with a one-sentence role, then load the prompt up with relevant context to help the model complete the task I'm assigning. I'm usually using reasoning models for coding tasks, so I might, for example, provide several other parts of the codebase as context in the prompt. For business development, you might provide performance data as context when developing a go-to-market strategy.
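If it helps to see the shape, here is a rough sketch of that pattern; the file name, business details, and task are hypothetical placeholders.

```python
# A rough sketch of the reasoning-model pattern: a one-sentence role, then as much
# relevant context as you can supply. The file name and business details are hypothetical.
role = "You are a business development strategist for a regional 3PL."

# Load whatever context is relevant: performance data, an existing strategy doc, etc.
with open("q2_pipeline_report.md") as f:
    performance_data = f.read()

prompt = (
    f"{role}\n\n"
    f"Context:\n{performance_data}\n\n"
    "Task: Draft a go-to-market strategy for the refrigerated freight segment, "
    "grounded in the performance data above."
)
# Send `prompt` to a reasoning model (o3, Gemini 2.5 Pro, etc.) the same way as in the earlier sketch.
```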

Privacy and Data Sharing

Keep in mind that any information you share with a model provider may be used to train future versions of the model. Marketing info is often fair game, but sensitive or personal information should not be entered into a public app. When you want to work with AI on something sensitive, I would offer this alternative:

Using a spreadsheet as an example: rather than uploading the spreadsheet, you can build a tool to analyze the information on your own computer without exposing the data to a third party. Share with the AI only the column headers—as long as those are not sensitive—and a couple of records of dummy data. Then ask the AI to write a Python script to perform the analysis across the full dataset, and run the script on your own computer to complete the work, preserving the privacy of the data while still leveraging AI for efficiency and insight.
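Here is a sketch of the kind of script the AI might hand back after seeing only your column headers and a couple of dummy rows. The file name and column names are hypothetical; the real spreadsheet never leaves your computer.

```python
# A sketch of the kind of analysis script the AI might write after seeing only the
# column headers and a couple of rows of dummy data. The file name and column names
# (region, service, revenue, last_order_date, company_name) are hypothetical.
import pandas as pd

df = pd.read_excel("customers.xlsx")  # or pd.read_csv("customers.csv")

# Revenue by region and service line
summary = (
    df.groupby(["region", "service"])["revenue"]
      .sum()
      .sort_values(ascending=False)
)
print(summary)

# Accounts with no orders in the last 12 months: potential re-engagement leads
df["last_order_date"] = pd.to_datetime(df["last_order_date"])
lapsed = df[df["last_order_date"] < pd.Timestamp.now() - pd.DateOffset(months=12)]
print(lapsed[["company_name", "region", "revenue", "last_order_date"]])
```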

At JAXPORT, we use good ol' American AI models and apps, but I recognize that some businesses and organizations have blocked the use of ChatGPT or other AI apps due to legitimate risks of leaking proprietary information. In these instances, just know that there are low-cost, easy-to-deploy LLMs your staff can use that don't share information with third parties.

I offer the following info to help inform your conversation with your IT partners:

  • On any Mac with an M1 chip or later that has at least 8GB of memory, or on a PC with a dedicated graphics card that has 8GB of VRAM or more (think gaming computer), you can run an AI model on the computer at no cost and without an internet connection. Ollama, LM Studio, and AnythingLLM are all apps you can install to chat with AI on your own machine (see the sketch after this list for what that can look like in code). The trade-off is that the smaller models that run on consumer-grade hardware are not as powerful or as fast as the state-of-the-art models offered by OpenAI, Google, etc. But local models offer 100% privacy and are completely free.
  • An enterprise-grade solution might be to host an open-source model on premises and offer access to your employees via your business' network, like your intranet.
  • Another option would be to use Microsoft Azure or AWS with their AI model provider partners in a dedicated contract that guarantees inputs and outputs would not be used to train future versions of the respective model.
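As an illustration of the first option, here is a minimal sketch of chatting with a local model through Ollama's OpenAI-compatible endpoint, so nothing leaves your machine. It assumes you have installed Ollama and pulled a model that fits your hardware; the model name is a placeholder.

```python
# A minimal sketch of chatting with a local model through Ollama's OpenAI-compatible
# endpoint. Assumes Ollama is installed and a model has been pulled first,
# e.g. `ollama pull llama3.2` (substitute whichever model fits your hardware).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's local, OpenAI-compatible API
    api_key="ollama",                      # required by the SDK but not used locally
)

response = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Draft a LinkedIn post announcing our new refrigerated warehouse."}],
)
print(response.choices[0].message.content)
```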

How I Used AI in this Blog Post

I wrote this blog post primarily the old-fashioned way, outlining and writing. I used AI for some specific tasks, including:

  • Assembling the list of AI models and apps with research tools from Grok, Perplexity, and OpenAI
  • Getting helpful feedback and edits from Claude Sonnet 4
  • Creating the transportation vehicle illustrations in the featured image with ChatGPT

Stay tuned for the next installment of TMSAi Summer School, where we'll dive into AI agents and build our first workflow.

 

 
