What we’re optimizing ChatGPT for

We build ChatGPT to help you thrive in all the ways you want: to make progress, learn something new, or solve a problem, and then get back to your life. Our goal isn’t to hold your attention, but to help you use it well.

Instead of measuring success by time spent or clicks, we care more about whether you leave the product having done what you came for.

We also pay attention to whether you return daily, weekly, or monthly, because that shows ChatGPT is useful enough to come back to.

Our goals are aligned with yours. If ChatGPT genuinely helps you, you’ll want it to do more for you and decide to subscribe for the long haul.

Here is what a helpful ChatGPT experience can look like.

Often, less time in the product is a sign it worked. With new capabilities like ChatGPT Agent, it can now help you achieve goals without being in the app at all—booking a doctor’s appointment, summarizing your inbox, or planning a birthday party.

We don’t always get it right. Earlier this year, an update made the model too agreeable, sometimes saying what sounded nice instead of what was actually helpful. We rolled it back, changed how we use feedback, and are improving how we measure real-world usefulness over the long term, not just whether you liked the answer in the moment.

We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress. To us, helping you thrive means being there when you’re struggling, helping you stay in control of your time, and guiding—not deciding—when you face personal challenges.

That’s why we’ve been working on the following changes to ChatGPT:

## Learning from experts

We’re working closely with experts to improve how ChatGPT responds in critical moments—for example, when someone shows signs of mental or emotional distress.

This work is ongoing, and we’ll share more as it progresses.

Our goal to help you thrive won’t change. Our approach will keep evolving as we learn from real-world use. We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal “yes” is our work.



Originally published on OpenAI News.