Monthly Update

Summary

  • Turbulence Part Deux
  • Tariff Beauty, In the Eye of the Beholder
  • Shake Up at the Dept of Education
  • Reality or Fantasy: AI Explained
  • More Reading on LLMs

Turbulence Part Deux


Our January Market Update highlighted “beneath the surface” turbulence in the financial markets. In February, that turbulence surfaced, pushing stock prices lower – and the selling pressure accelerated into early March.


The US stock market hit a new all-time high on February 19. But the mood in US financial markets, which had been generally positive since election day, shifted as investors grew nervous about:


  1. The lofty prices of big tech stocks
  2. The possibility of slower economic growth
  3. Repercussions related to trade policy and tariffs


Since mid-February, technology shares have been leading the way lower for US stocks. From the February peak to the close of trading on March 10, the Nasdaq 100 technology-focused stock index declined by 12.9%.


Declines in the broader stock market have been less pronounced. The S&P 500 Index of large company stocks has declined by 8.6% from the mid-February peak.


Interestingly, foreign-company stocks have held up relatively well, with the benchmark MSCI EAFE Index of 21 developed countries (excluding the US and Canada) rising by about 1% since February 19.


The repercussions related to trade policy and tariffs are unclear at this point because the policies themselves are unclear, with the administration taking an on-again, off-again approach to both policy formulation and implementation.


However, consumers’ concerns amped up in February. Worries about the future direction of the US economy were displayed in two monthly surveys:


The University of Michigan’s Index of Consumer Sentiment for February declined sharply from the January reading, and inflation expectations took a big step up from January. This monthly survey polls at least 500 individuals each month from across the US.


A separate consumer survey, conducted by The Conference Board (which interviews approximately 300 consumers each month), showed similar results: a steep decline in consumer confidence (the third straight monthly decline) and an increase in inflation expectations.


Below is a chart showing the Consumer Confidence Index, published on The Conference Board’s website. The blue line shows the history of the Consumer Confidence Index. The grey bars indicate periods of economic recession.

According to The Conference Board, a reading below 80 generally indicates a potential recession is ahead, based on consumers’ short-term outlook for income, business, and labor market conditions. Currently, the data is a healthy distance from signaling a recession.


A senior economist of global indicators at The Conference Board had this to say about February’s survey: “References to inflation and prices in general continue to rank high in write-in responses. Most notably, comments on the current administration and its policies dominated the responses.”


Without clarity on policies that will affect jobs and the cost of goods and services, consumer concern likely will stay high, and consumer activity may downshift.


Because consumers make up the largest component of the US economy – consumer spending accounts for roughly two-thirds of Gross Domestic Product, or GDP – financial market participants smell trouble, which has pressured stocks.


For the month of February, large-company foreign stocks (MSCI EAFE Index) came out on top and gained 3.1%. US investment-grade bond returns (Bloomberg Aggregate Bond Index) also were appealing, with a gain of 2.2%.


Large-company US stock returns, measured by the S&P 500 Index, declined by 1% in February. Technology shares, measured by the Nasdaq 100 Index, fell by 2.7%, and small-company US stocks fell by 4.8% (CRSP US Small-Cap Index).


Here’s a snapshot of February market performance (note that returns from the first week of March are excluded from the chart below):

Note: Foreign Stocks = MSCI EAFE International Index; US Small Co = CRSP US Small Cap Index; US Large Co = S&P 500 Index; US Tech Stocks = Nasdaq 100 Index; US Bonds = Bloomberg US Aggregate Bond Index


Tariff Beauty: In the Eye of the Beholder


For some, tariff “is the most beautiful word in the dictionary.” But not for all. Beauty is in the eye of the beholder.


Below is a table that summarizes the tariffs that have been announced so far (courtesy of Apollo) along with their corresponding dates of implementation:

A month ago, President Trump announced that he would impose sweeping tariffs on imports from Canada, Mexico, and China. Soon after, a last-minute deal was reached to delay the Canada and Mexico tariffs for 30 days.


During the first week of March, when the tariffs were scheduled to come into effect, the tariffs on Canada and Mexico were watered down with a 30-day reprieve for automakers.


Also, broader exemptions for other products imported from America’s neighbors were granted after lobbying from business groups that warned of rising prices.


This fluid situation around tariffs may be a feature of the administration’s approach to negotiation, and the back-tracking could reflect a realization that tariffs likely will cause domestic production and supply disruptions, push up inflation, and weigh on economic growth.


The administration's main economic goals in applying tariffs appear to be to:


  • address unfair trade practices
  • correct significant trade imbalances
  • rebuild the US manufacturing sector


But trade policy that relies on tariffs as a cornerstone is high-risk.


In a recent article, JP Morgan Asset Management’s Chief Global Strategist highlighted that tariffs have undesirable consequences including that they:


  • Raise prices
  • Slow economic growth
  • Cut profits
  • Increase unemployment
  • Worsen inequality
  • Diminish productivity
  • Increase global tensions


Economists at major banks and research firms have begun to raise their estimates of the odds that more tariffs will be implemented (not just threatened) and kept in place for longer.


On Monday, March 10 the chief economist at Goldman Sachs published a report that factors in new, more adverse trade policy assumptions. He made a significant downgrade to his US growth forecast for 2025 (by nearly 1 percentage point) – though he still expects the US economy to expand this year.


Moody’s Analytics has estimated that if the US were to impose universal tariffs on all goods entering America, it could slow US economic growth by 3 percentage points by 2026, which likely would push the economy into a recession.


At this point, I don’t believe a tariff-induced US recession is the likely outcome, in part because tariffs still appear to be more malleable than ironclad, but also because the US economy has proven to be resilient and remains on a sound economic footing.


But it’s also quite possible that continued tough talk toward trading partners coupled with policy action that sticks will bite. A meaningful slowdown in the quarters ahead may be in the cards as well as more unsettling moves in stocks.


For all investors, holding to a stress-tested financial plan with an appropriate investment strategy and asset-allocation target is the time-tested way to weather financial market swings, irrespective of what headline or new development is causing the volatility.


And specifically for retirees who depend on portfolio withdrawals, verifying that enough cash is on hand to avoid having to sell stocks if stock volatility persists for an extended period is always good practice.


US Department of Education Shakeup: Effects on Colleges, Universities, and Students

 

Our colleague and college specialist Donna Cournoyer shares her thoughts on the shake-up at the Department of Education and what it means for colleges, universities, and students.


When the new administration took office on January 20, it moved swiftly, making big changes to multiple US government agencies, including the US Department of Education.


There have been layoffs, mass firings, and talk of eliminating some departments altogether, and the US Department of Education is at the top of that list.


Linda McMahon was confirmed on March 3 as the next Secretary of Education, seemingly with the sole purpose of dismantling the entire US Department of Education.


As of March 6, President Trump was expected to soon issue an executive order aimed at abolishing the Education Department, according to people briefed on the matter, as reported in the Wall Street Journal.


However, it appears that eliminating the department would take an act of Congress.

 


Background

 

What is the US Department of Education?

 

  • Federal agency created by Congress in 1979
  • Responsible for overseeing education policies and programs across the country
  • Employs more than 4,000 people
  • Annual budget of $79 billion
  • Overseen by the US Secretary of Education



Main Roles of the US Department of Education

 

Funding for US Public Schools


While most funding comes from state and local governments, the US Department of Education provides between 6% and 13% of funding for public schools, according to a 2018 report from the U.S. Government Accountability Office.


  • Title I: Helps serve lower-income communities. In 2023, the Education Department received more than $18 billion for Title I.
  • IDEA (Individuals with Disabilities Education Act): Provides money to help districts serve students with disabilities. In 2024, the department received more than $15 billion for IDEA.


Created by separate acts of Congress, Title I was signed into law in 1965, and IDEA was signed into law in 1975. It is highly unlikely that these acts would be undone: repealing them would require an act of Congress, and both have broad bipartisan support.


Tracking of Student Achievement Through the “Nation’s Report Card”


The department oversees the National Assessment of Educational Progress (NAEP), known as the “Nation’s Report Card”.


  • Congress mandated this assessment in 1969; it tests students in reading, math, science, and other subjects
  • It also offers insights into attendance, economic conditions, and students' educational backgrounds
  • Educators, policymakers, and researchers use this data to work toward improving K-12 education across the US


Oversight of Federal Grants and Federal Student Loans for College Students


The department manages federal aid programs, including Pell Grants and student loans, which help students afford higher education.

Key Functions Include:


  • Managing the federal student loan portfolio of approximately $1.6 trillion, including oversight of the outside contractors that service the loans.
  • Managing the FAFSA (Free Application for Federal Student Aid), which determines eligibility for grants, loans, and work-study programs for college students.
  • More than 17 million current students and new applicants fill out the FAFSA each year.
  • FSA (Federal Student Aid) provides approximately $120.8 billion in grant, work-study, and loan funds each year to help students and their families afford a college education.
  • This includes $33 billion in Pell Grants for low- and middle-income undergraduates.
  • FSA also works to prevent fraud and abuse by ensuring that schools and borrowers comply with federal regulations.


Data Collection on Colleges and Students


Through the IPEDS (Integrated Postsecondary Education Data System), the US Department of Education gathers independent research, statistics, and evaluations of colleges throughout the country. Schools are required to complete detailed reports each year.


  • This information helps students and parents analyze and compare different schools through admissions statistics, academic outcomes, graduation rates, need-based eligibility data and more.


Although the current administration has said it would close the Department of Education, and return “all education, and education work and needs back to the states”, it is already up to the states and local agencies to determine what is taught in classrooms. The department does not dictate what is taught at K-12 schools, colleges, or universities.



Recent Changes to the Department of Education Since the Inauguration and Their Potential Implications


Repayment of Federal Student Loans


If federal student loans face disruptions, students may have to move to private lenders, where they may be subject to higher interest rates and fewer repayment or forgiveness options.


  • The current administration has already paused applications for some income-driven repayment plans.
  • If the administration moves repayment of the $1.6 trillion loan portfolio to another agency, such as the Treasury Department (an idea that has been discussed), it likely won’t be a quick or smooth process.
  • The future of the loan repayment plans is uncertain; students could see higher monthly payments if the plan options change.
  • One extreme possibility is the privatization of the entire student loan system, which would have widespread financial implications for loan holders.


Proposed Cuts to University Research Funding


The current administration has proposed significant reductions in federal funding for university research, particularly targeting the National Institutes of Health (NIH).


Legal actions have been initiated: a federal judge in Massachusetts issued a preliminary injunction blocking implementation of the proposed 15% cap on indirect cost reimbursements for NIH grants.


Despite the legal challenges, the proposed cuts have created widespread concern among research institutions.


  • Stanford University has implemented a hiring freeze, with leadership citing potential NIH funding reductions and increased taxes on large private university endowments as factors necessitating these precautionary measures.
  • Rice University anticipates that the cuts would jeopardize critical projects, including innovative cancer treatments and detection technologies. It warns that without sufficient support for indirect costs, it may face the difficult choice of raising tuition or halting certain research altogether.


These proposed cuts pose significant challenges to universities and could disrupt essential research, lead to job losses, and erode the US competitive edge in global science and technology.


Beyond individual universities, there is a growing concern that reduced research investment could hinder scientific innovation and diminish US global leadership in technological advancement.


Some analysts caution that this may allow other nations, notably China, to surpass the US in critical areas like artificial intelligence and quantum computing.


Federal Grants and Loans: Eligibility and Processing at Colleges and Universities


  • The FAFSA has already been through two difficult years with the rollout of a new form. Changing oversight of the form (a possibility) or eliminating the US Department of Education would likely cause more upheaval in the application process for college students.
  • Eligibility: It is unclear at this point whether any action will be taken to change how the FAFSA calculates eligibility.

 

Federal Employee Layoffs or the Elimination of the US Department of Education


  • Fewer employees at the department and FSA (Federal Student Aid) could mean disruption of the processing and flow of federal aid, including grants and student loans.
  • Note that federal aid programs such as Pell Grants and student loans are established by Congress; ending them by executive order is not a legal option and is highly unlikely.
  • If the processing of federal aid changes hands, it may cause significant challenges for the timely disbursement and processing of aid for students.



Final Thoughts


As an insider in the Higher Education sphere for much of my career, I understand the processes and intricacies of how colleges and universities operate on many levels.


Colleges are run like businesses and rely on tuition to meet budgetary needs. Any federal cuts can have significant impacts.


Endowments (the funds collected from donors) are designed to be a permanent source of income and are used to make long-term investments.


These funds are used to sustain the operating costs and to offer discounts on tuition to attract students to enroll. If other funding is reduced, more strain may be put on endowments going forward.


Financial aid offices coordinate the awarding and disbursement of funds across many systems so that students receive their aid in a timely manner while also complying with federal regulations.


Colleges are still rebounding from the pandemic, and enrollment is in a steady decline due to a shrinking birth rate.


Along with the FAFSA application issues, the upheaval caused by the current administration’s decisions, particularly those affecting the US Department of Education, will likely create more challenges.


Since the Covid-19 pandemic, seventy-four colleges have closed, merged, or announced plans to close, according to bestcolleges.com.


Institutions may become even more unaffordable, research and learning opportunities may suffer in quality, and the already complicated, opaque process of applying to college is likely to become even more challenging.


The biggest immediate effect for students likely will have to do with financial aid.


A much smaller Department of Education (or its outright elimination) would make the processing and disbursement of federal funds challenging and would likely create disruption and delays. Exactly how this plays out is yet to be seen.


As students and families approach applying to college (and for those in college), it is a good practice to remain informed, particularly as significant changes are likely on the horizon.


Seeking professional advice when possible and keeping calm as you approach college planning will pay off when it comes time to make the four-year commitment and decide on financing options.


Is This The Real Life? Is This Just Fantasy? - AI Explained


Our colleague and Operations Manager Alex Kania explains how a newer development in Artificial Intelligence, called a Large Language Model, works – and how you can make it work for you.


If the first two decades of this millennium were defined by the emergence and increasing relevance of the internet into public life, the next decades may well be defined by the explosion of Artificial Intelligence.


In a few short years, "AI" (Artificial Intelligence) has transformed from sci-fi speculation to a normal part of many people's personal and professional lives.


Advocates see this as a transformative tool which could usher in the greatest technological change since the industrial revolution.


Opponents see it as an existential risk factor comparable to nuclear weapons. Skeptics see it as a glorified party trick bandied about to boost share prices while failing to offer anything truly transformative.


I'll leave it up to you to decide who is right.


In the article that follows, I explain how this current crop of (at times) shockingly human-like services like ChatGPT actually works, and how you might use them.



What do we mean by "AI," anyway?


AI is a loaded term which conjures images of an impossibly intelligent computer system far beyond our fleshy comprehension. The reality is actually much more mundane: "AI" is a fairly nebulous term which simply refers to a machine that can accomplish a task or tasks typically associated with human intelligence.


This leads to a paradox where once a task becomes more associated with machines than human intelligence, it definitionally ceases to be "AI." Searching a database, for example, is now almost exclusively thought of as a job for computers, but was once (not too long ago, so I'm told) performed by humans.


In some sense, then, AI has been here for at least the past 20 years - but this is not what's driving the current "AI Boom."


Rather, concurrent increases in computational power and the development of more sophisticated means of representing language have led to tools that can (seemingly) read and write like people, powered by something called a "Large Language Model."



What is a Large Language Model?


A large language model (or "LLM") is a computer program which can process natural language. Models designed to generate output text in response to the user are considered "generative," since they generate text as a response, although LLMs also encompass models designed for other functions, like classifying text or generating code from a text input.


LLMs earn the name “large” because they are trained on *enormous* collections of text – scanning through millions of books, articles, and websites.


By digesting this vast data, an LLM builds up a statistical model of language, which it uses to produce new sentences that sound fluent and human-like. Think of an LLM as a super-advanced version of the autocomplete feature on your phone.


Just as your phone might suggest the next word when you’re texting, an LLM can predict words to continue a sentence in a sensible way.


The difference is that an LLM’s abilities go far beyond a simple text suggestion – it can generate paragraphs of text, answer complex questions, write stories or essays, and even carry on a dialogue.



Modeling Language


A model is a simplified representation of something designed to preserve some key elements.


For example, a model train preserves the appearance of a train while eliminating or changing other attributes (like changing the size and eliminating the ability to carry passengers).


Modeling language in this context means simplifying it to a series of statistical interrelations between symbols with no preservation of the deeper understanding of the world language represents.


In some sense, language is a model of our experience of the world, shaving off qualia but maintaining enough of some underlying essence to allow for communication.


LLMs are effectively working with a model of a model - although this can produce some very human-like results, it's important to remember these systems have no deep understanding of language beyond their encoded statistical maps.


One way to think of it is like a very well-read parrot that has seen every possible context for words – it doesn’t know facts or truth per se, but it knows how words are usually used together. As a result, it can produce remarkably coherent sentences on almost any topic.
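For the technically curious, here is a minimal sketch in Python of what a statistical “map” of language looks like, using a toy bigram model (simply counting which word tends to follow which) over a made-up three-sentence corpus. Real LLMs use neural networks with billions of parameters rather than a lookup table, but the spirit of “learn how words are usually used together” is the same.

  from collections import defaultdict, Counter

  # A toy "training corpus" -- real LLMs ingest millions of documents.
  corpus = (
      "the cat sat on the mat . "
      "the dog sat on the rug . "
      "the cat chased the dog ."
  )

  # Build a bigram table: for each word, count which words follow it.
  # This is a (very crude) statistical map of how words are used together.
  follows = defaultdict(Counter)
  words = corpus.split()
  for current_word, next_word in zip(words, words[1:]):
      follows[current_word][next_word] += 1

  # The "model" now encodes statistical relationships between symbols,
  # with no understanding of cats, dogs, or mats.
  print(follows["the"])  # e.g. Counter({'cat': 2, 'dog': 2, 'mat': 1, 'rug': 1})
  print(follows["sat"])  # Counter({'on': 2})

Asked to continue “the cat sat on the…”, a model like this would simply look up which words most often follow “the” and pick one; scale the same idea up enormously, and you are in LLM territory.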



How Does an LLM Decide What to Say?


When you ask an LLM a question or give it a prompt, how does it come up with a response? It all boils down to prediction.


The model looks at your input and internally tries to predict a suitable next word, then the next, and so on, one word at a time. Each word is chosen based on probabilities the model has learned.


Essentially, the LLM asks itself: “Given everything I’ve seen (in the prompt and in my training), what word is most likely to come next?”, and it outputs that. Then it repeats the process for the following word, continuing until it produces a complete answer.


Here’s an analogy: imagine the AI is continuing a text that it “thinks” you want to see. If you start a sentence, it will finish it in a way that fits the style and context. The LLM doesn’t have a database of perfect answers; instead, it generates answers on the fly by stringing together likely words.


For example, if the prompt is “The actress that played Rose in the 1997 film Titanic is named…” the model will recognize this as a question about a known fact. It will consider the patterns in its training data and likely predict “Kate” then “Winslet” as the next words, forming the answer "Kate Winslet."


The model arrives at this by having “read” many movie articles and knowing that “Rose…Titanic…is named” is often followed by that name. In essence, the LLM is doing an educated guess based on learned patterns.


It’s important to note that the model isn’t retrieving this answer from a stored fact lookup; it’s generating a response from what it learned.


If your prompt were slightly different or if there were ambiguity, the model’s guess for the next word could be different.


The decision process is statistical: the LLM has a sort of internal compass that was calibrated during training to point to likely continuations.


Because it has so many parameters (like millions or billions of “neurons” adjusting to text patterns), it can capture subtle relations – like understanding that “Rose”, “Titanic”, and “actress” together are likely talking about Kate Winslet. That’s how it determines what to say next.
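To make the “one word at a time” loop concrete, here is a minimal sketch in the same toy spirit. The next-word probabilities below are invented purely for illustration (a real model learns values like these during training, spread across billions of parameters); the loop just keeps sampling a likely next word until the answer looks complete.

  import random

  # Hypothetical next-word probabilities, hand-made purely for illustration.
  # A real LLM learns values like these from its training data.
  next_word_probs = {
      "named": {"Kate": 0.90, "Rose": 0.10},
      "Kate": {"Winslet": 0.95, ".": 0.05},
      "Winslet": {".": 1.0},
      "Rose": {"DeWitt": 0.60, ".": 0.40},
      "DeWitt": {"Bukater": 1.0},
      "Bukater": {".": 1.0},
  }

  def generate(prompt_word, max_words=10):
      """Produce text one word at a time by sampling a likely next word."""
      output = [prompt_word]
      word = prompt_word
      for _ in range(max_words):
          choices = next_word_probs.get(word)
          if choices is None:       # nothing learned for this word: stop
              break
          word = random.choices(list(choices), weights=list(choices.values()))[0]
          output.append(word)
          if word == ".":           # treat a period as "answer complete"
              break
      return " ".join(output)

  print(generate("named"))  # most often: "named Kate Winslet ."

Run it a few times and you will occasionally see a less likely continuation, which is one (very simplified) reason the same prompt can produce different answers from the same model.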


Another way to think about it: the LLM generates text kind of like how we form sentences when speaking off the cuff.


We don’t plan every word in advance; our brains produce words that make sense as we go.


Similarly, the LLM produces one word at a time in a fluid manner. It doesn’t have a conscious plan or an agenda – it’s just following the direction given by the input and its training.


This is why sometimes the outputs can surprise even the creators of the model: it’s not using a simple script, it’s dynamically weaving a response from learned language patterns.



When AI Gets It Wrong: “Hallucinations” in LLMs


Now that we've established that LLMs "predict" instead of "think", it may be easier to understand how and why they get things wrong.


LLMs generate text by probability, not by querying a database of verified truths. If the prompt leads into territory the model is uncertain about, it will still produce an answer because that’s what it’s designed to do – generate a continuation.


Unlike humans, LLMs don’t say “I don’t know” unless specifically trained to. They just keep writing something that sounds right. For users, the key takeaway is: LLMs do not guarantee accuracy. They can embed plausible-sounding falsehoods in their answers.


This is why using an LLM can feel like conversing with a very knowledgeable but sometimes overconfident person who occasionally “bluffs” an answer when unsure.


In practical terms, when you use an LLM (like asking for medical advice, legal information, historical facts, etc.), it’s wise to treat the responses with a healthy dose of skepticism. Use them as a helpful draft or a starting point, but verify critical details from trusted sources.


The technology is improving, and newer models are trying to reduce these hallucinations, but no LLM is 100% reliable on facts.



LLM vs Search Engine: What’s the Difference?


It’s easy to confuse using an LLM with using a search engine like Google, since both can answer questions. However, they work very differently.


Search engines (Google, Bing, etc.) are tools that find and retrieve information. When you search, the engine looks through its indexed web pages for your keywords and returns a list of links to webpages, images, or documents that might contain the answer.


Essentially, a search engine is like a librarian – you ask for information, and it hands you a stack of books or articles (the search results) where you might find what you need. It’s then up to you to read and extract the answer. Traditionally, search engines don’t generate new text; they give you existing content from the web, along with its source.


LLMs (ChatGPT, Gemini, and similar) are tools that generate content. You ask a question or give a prompt, and the LLM directly produces an answer in natural language. It does not give you a list of sources or direct excerpts unless specifically designed to do so.


Instead, it creates a response on the fly. Using the librarian analogy, an LLM is like a knowledgeable person you ask a question to, and they speak an answer back to you in full sentences, as if they’re explaining or teaching.


The answer is synthesized from what the model “knows” (from its training data), not quoted from a specific webpage.
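As a rough illustration of the difference, here is a toy “search engine” in Python. It only retrieves existing pages whose text matches the query’s keywords and hands back their links, leaving the reading to you; the pages and URLs are placeholders invented for the example, and a real engine indexes billions of pages and ranks them far more cleverly.

  # A toy "search engine": it retrieves existing pages that match keywords
  # rather than writing a new answer. URLs and text are made-up placeholders.
  indexed_pages = {
      "https://example.com/titanic-cast": "Kate Winslet played Rose in the 1997 film Titanic.",
      "https://example.com/titanic-ship": "The RMS Titanic sank in April 1912.",
      "https://example.com/oscars-2009": "Kate Winslet won an Academy Award in 2009.",
  }

  def search(query):
      """Return links to pages whose text contains every keyword in the query."""
      keywords = query.lower().split()
      return [
          url
          for url, text in indexed_pages.items()
          if all(word in text.lower() for word in keywords)
      ]

  # The engine hands back sources; reading and judging them is up to you.
  print(search("rose titanic"))  # ['https://example.com/titanic-cast']

An LLM, by contrast, skips the list of links entirely and simply writes out an answer such as “Kate Winslet played Rose in Titanic (1997)” in its own words.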


It’s worth noting that the lines are blurring: modern search engines are integrating AI (Google now often shows AI-generated answers or summaries at the top of search results) to give more direct answers, and many LLM-based services can cite sources or even search the web when generating answers.


But fundamentally, an LLM is not searching the live internet when you ask it something (unless explicitly connected to a search tool). It relies on its pre-existing training data and any provided information to craft a response.


This means LLMs might not have the latest information, whereas a search engine is continuously updated by crawling new web content.



What Can LLMs Help With in Everyday Life?


Summarizing Information: If you have a long article or report and you want just the key points, an LLM can summarize the text for you in a few paragraphs or bullet points. This is like having a speedy reader digest content and present the highlights.

It’s useful for skimming news, research papers, or even simplifying a dense legal document into plain language.
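For readers who like to tinker, the same summarization can be done programmatically. Below is a minimal sketch using OpenAI’s Python library as one example; it assumes the openai package is installed, an API key is stored in the OPENAI_API_KEY environment variable, and the model named in the code is available to your account (any comparable model, or another provider’s similar interface, would work just as well). Most people will simply paste their text into the chat window instead.

  # A minimal sketch of programmatic summarization with OpenAI's Python SDK.
  # Assumes: `pip install openai`, an API key in the OPENAI_API_KEY environment
  # variable, and access to the model named below (swap in any model you have).
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  long_article = """(paste a long article or report here)"""

  response = client.chat.completions.create(
      model="gpt-4o-mini",
      messages=[
          {"role": "system", "content": "You summarize documents in plain language."},
          {"role": "user", "content": "Summarize the key points as 3 bullet points:\n\n" + long_article},
      ],
  )

  print(response.choices[0].message.content)

Swapping in a different request (“Explain this to a 10-year-old,” “Draft a three-day Paris itinerary focused on art museums”) reuses the same pattern for the other everyday uses described below.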


Explaining or Tutoring: LLMs can explain complex topics in simpler terms. Curious about a scientific concept or a piece of history? You can ask an LLM to explain quantum physics as if you’re 5 years old, or to summarize the causes of World War I in a concise way.


Planning and Advice: While they’re not perfect, LLMs can help generate plans or give advice on everyday matters.


For example, trip itineraries (“Plan a 3-day visit to Paris with a focus on art museums”), meal planning (“What’s a healthy dinner idea with chicken and broccoli?”), or personal to-do lists (“Help me create a weekly schedule for my study routine”).


The LLM will generate a structured suggestion that you can then adjust to your needs. They can even play role-based scenarios – like acting as a personal coach giving you motivation tips or a historical figure answering in character, which can be both fun and educational.


Keep in mind though, while LLMs can assist with tasks, they are not perfect and can occasionally produce odd or incorrect results (remember the hallucinations). So for critical tasks, you wouldn’t rely on the LLM alone.


But for every-day, low-stakes tasks, they can save you time and effort by handling the heavy lifting of drafting text or searching through information. In fact, studies have shown that using LLMs can streamline many routine tasks, freeing up time for more important things.



What LLMs Can I Use Today? Do I Have to Pay?


Numerous LLMs are publicly accessible, either directly or through products built on top of them. Some (like ChatGPT, Gemini, Copilot) are directly aimed at end-users and have easy chat interfaces.


Others are more behind-the-scenes but might surface in apps you use. The good news is you don’t need to be a programmer or a tech guru to try them – if you can use a web browser or install an app, you can likely access an LLM.


For the big names, just visit their official websites and they’ll guide you on how to start. (As with any online service, be mindful of official links to avoid scams.)


One great thing about the AI boom is that many LLM services offer free access, at least for basic usage. For instance, ChatGPT has a free version that anyone can use, as do Google’s Gemini and Anthropic's Claude.


You might wonder why they’re free – often it’s because companies are gathering feedback, improving the AI, or integrating it with their services, so they want as many people as possible to use it.


That said, there are usually paid options or subscriptions for heavy users or for accessing more advanced features.


Using ChatGPT as an example: OpenAI offers a subscription called ChatGPT Plus (roughly $20 a month) which gives access to their most advanced models (more capable than those available for free), and also provides faster responses and priority access even when demand is high.


But if you’re a casual user, the free ChatGPT service is quite capable on its own for light use and trying things out. The paid tier is more for enthusiasts or professional use where the small improvements in quality and speed matter.



Is Your Personal Information Safe with LLMs?


If by “safe” we mean confidential – you should assume that what you type into an online LLM might be seen by humans running the service or at least by the AI company’s algorithms.


It’s not broadcast publicly (other users won’t see your specific chats), but it’s also not 100% private like talking to a lawyer or doctor under privilege.


A good rule of thumb is not to share anything with an AI chatbot that you wouldn’t share in an email to a stranger.


Keep your inputs fairly generic and non-sensitive. Asking it to draft a generic business plan is fine; asking it to analyze your personal medical records is not a good idea.


On the flip side, companies are aware of these concerns. They do implement security measures: for instance, OpenAI claims conversations are encrypted in transit and at rest, and only authorized personnel can access data.


They also have policies against the AI requesting personal data from users. So, it’s not that using an LLM is dangerous; it’s just that you are your own best gatekeeper – you decide what to divulge.


In summary, personal information is as safe as you make it when using an LLM. The AI itself isn’t malicious or trying to steal info, but the infrastructure around it means your data isn’t totally private.


Treat an AI chat like a semi-public space: enjoy the conversation, get help with tasks, but don’t spill secrets. If you stick to that guideline, using LLMs can be very safe.



More Reading on AI and LLMs


Alex utilized a range of sources when preparing the article on Artificial Intelligence and Large Language Models. You might enjoy diving deeper into the subject, so we’ve included links to the sources below.


Best regards,

Rob
If you like this letter and want to share it, please feel free to forward it to a friend!
Moore Financial Advisors
83 Leonard St, Suite 9
Belmont, MA 02478 
617-393-9999
Contact Us