How to Embrace Design in 2025: UX Trends You Need To Know

A new year is around the corner, and with it comes a wave of new opportunities for organizations to step up their game in digital strategy and product design. Whether it’s delivering unforgettable user experiences, empowering employees with better tools, balancing needs across omnichannel experiences, or ensuring accessibility for everyone, the trends shaping the future of design and UX are all about creating meaningful, impactful solutions.

If you’re looking to design products that truly resonate—whether for your customers, employees, or the world at large—these are five big trends to keep on your radar. Let’s explore how each can make a difference and what you can start doing to stay ahead.

1. Personalizing user experiences with AI

Let’s be honest—personalization is no longer a luxury. People now expect apps, tools, and services to know what they need and deliver it before they even ask. AI is the engine behind all of this magic, enabling products to tailor experiences based on user preferences, behaviors, and goals.

Why it matters in 2025
Generative AI models such as ChatGPT and Bard have moved from experimental phases to practical applications. AI-powered personalization makes users feel understood and valued, which builds trust and loyalty. Whether it’s a business tool that learns a user’s workflows or an app that suggests relevant next steps, personalized experiences are becoming a baseline expectation. In 2025, expect enterprises to integrate these AI-driven personalization tools into large-scale platforms, like CRMs and ERP systems—moving beyond pilot projects into full-scale deployment.


How to get started

  • Be transparent about data: users are more likely to embrace AI-driven personalization if they trust you. Be clear about what data you collect, how you use it, and how you’re keeping it safe.
  • Think multi-channel: users interact across multiple devices and platforms, so make sure your AI systems provide consistent experiences everywhere.
  • Start small, but think big: begin with focused personalization features that add clear value, then expand as your systems and data capabilities grow (see the minimal sketch below).
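To make “start small” concrete, here is a minimal, rule-based sketch of next-step suggestions keyed to recent user behavior. The event names and suggestion copy are hypothetical placeholders; a production system would layer a trained recommendation or ranking model on top of signals like these.

```python
# Minimal sketch: rule-based "next step" suggestions driven by recent user events.
# Event names and suggestion copy are hypothetical placeholders; a real system
# would combine signals like these with a trained ranking or recommendation model.

RECENT_EVENT_SUGGESTIONS = {
    "exported_report": "Schedule this report to run weekly",
    "invited_teammate": "Set up a shared workspace for your team",
    "abandoned_checkout": "Pick up where you left off in your cart",
}

def suggest_next_steps(recent_events: list[str], limit: int = 2) -> list[str]:
    """Return personalized suggestions for the user's most recent events."""
    suggestions: list[str] = []
    for event in reversed(recent_events):  # newest events are last in the log, so walk backwards
        suggestion = RECENT_EVENT_SUGGESTIONS.get(event)
        if suggestion and suggestion not in suggestions:
            suggestions.append(suggestion)
        if len(suggestions) == limit:
            break
    return suggestions

print(suggest_next_steps(["logged_in", "exported_report", "invited_teammate"]))
# ['Set up a shared workspace for your team', 'Schedule this report to run weekly']
```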

By leaning into AI-driven personalization, you’ll not only meet user expectations but also create a smoother, more intuitive experience that keeps them coming back.

2. Turning data into action with analytics

Data has been a buzzword for years, but in 2025, it’s all about what you do with it. Advanced analytics tools are giving enterprises the power to understand users better, predict behaviors, and make decisions faster. It’s like having a crystal ball for UX.

Why it matters in 2025
Customers and employees expect seamless, problem-free interactions. Advanced analytics can spot issues before they happen, helping you design systems that are proactive, not reactive. Imagine catching a bottleneck in your user journey before anyone complains—that’s the kind of control you want over your systems. New tools and APIs introduced in late 2024 have made real-time analytics more accessible, putting predictive analytics within practical reach and supporting more proactive decision-making.
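As a small illustration of catching a bottleneck before anyone complains, here is a minimal sketch of proactive funnel monitoring. The journey steps, sample events, and 25% drop-off threshold are hypothetical placeholders; a real implementation would sit on top of your analytics pipeline.

```python
# Minimal sketch: flag the steps in a user journey where drop-off exceeds a
# threshold, so the team hears about the bottleneck before users complain.
# Step names, the sample events, and the 25% threshold are hypothetical.

FUNNEL_STEPS = ["landing", "signup", "configure_workspace", "invite_team"]
DROPOFF_ALERT_THRESHOLD = 0.25  # alert when more than 25% of users drop at a step

def find_bottlenecks(events: list[tuple[str, str]]) -> list[str]:
    """events is a list of (user_id, step_reached) pairs; returns drop-off alerts."""
    users_at_step = {step: set() for step in FUNNEL_STEPS}
    for user_id, step in events:
        if step in users_at_step:
            users_at_step[step].add(user_id)

    alerts = []
    for prev_step, next_step in zip(FUNNEL_STEPS, FUNNEL_STEPS[1:]):
        reached = len(users_at_step[prev_step])
        continued = len(users_at_step[next_step])
        if reached and 1 - continued / reached > DROPOFF_ALERT_THRESHOLD:
            alerts.append(
                f"High drop-off between '{prev_step}' and '{next_step}': "
                f"{reached - continued} of {reached} users did not continue"
            )
    return alerts

sample_events = [
    ("u1", "landing"), ("u2", "landing"), ("u3", "landing"), ("u4", "landing"),
    ("u1", "signup"), ("u2", "signup"), ("u3", "signup"),
    ("u1", "configure_workspace"), ("u2", "configure_workspace"),
    ("u1", "invite_team"), ("u2", "invite_team"),
]
for alert in find_bottlenecks(sample_events):
    print(alert)  # High drop-off between 'signup' and 'configure_workspace': 1 of 3 users did not continue
```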

In addition, the growing adoption of AI in analytics provides businesses with prescriptive guidance, not just raw data. In 2025, enterprises will leverage AI to receive more actionable recommendations, a capability that was still somewhat emerging in 2024.


How to get started

  • Focus on meaningful metrics: don’t just track everything—identify the user behaviors that matter most and use analytics to monitor and improve them.
  • Bring everyone on board: make analytics insights accessible to all your teams—designers, developers, and decision-makers. This way, everyone can act on the data.
  • Embrace iteration: use analytics to test, learn, and tweak your designs. The best products are always improving.

When you use analytics to guide your decisions, you can create experiences that feel smooth, intuitive, and perfectly in tune with what your users need.

3. Digging deeper with digital ethnography

If you want to design products people truly love, you need to understand not just what they do but why they do it. That’s where digital ethnography comes in. This method lets you study how users interact with your product in real-life situations, giving you insights that surveys and focus groups simply can’t.

Why it matters in 2025
User behaviors are more complex than ever, and traditional research methods often miss the nuances. Digital ethnography lets you see your product through the user’s eyes—literally, if you’re using tools like video diaries or screen recordings. Post-pandemic, remote and hybrid work models have mostly stabilized, allowing researchers to fully embrace digital ethnography. In addition, the development of new mobile and wearable technology in 2024 has enhanced the ability to collect rich, context-aware user data in real time. By 2025, these tools will become widely available, making digital ethnography scalable for enterprises.


How to get started

  • Make it easy for users to share: use tools that let participants capture their experiences naturally, like mobile apps for documenting tasks or workflows.
  • Act on what you learn: turn insights into actions—whether that’s simplifying a confusing workflow or addressing an unmet need.
  • Keep listening: user needs evolve over time, so make digital ethnography an ongoing part of your design process.

By truly understanding your users, you’ll be able to create products that not only meet their needs but feel tailor-made for their lives.

4. Building better tools for employees

Let’s not forget—employees are users, too! They need tools that are as seamless and intuitive as the customer-facing products you create. Unfortunately, enterprise tools often lag behind in user experience, which can lead to frustration and inefficiency. In 2024 there is heightened recognition that applying consumer-grade design principles to employee tools directly impacts productivity, retention, and business outcomes, and we expect this investment in employee tools to continue into 2025.

Why it matters in 2025
In a hybrid work world, employees rely on technology more than ever to stay connected and productive. In 2025, refined solutions will integrate productivity, collaboration, and well-being features into cohesive ecosystems, addressing the evolving needs of the workforce. By investing in tools that prioritize ease of use, collaboration, and well-being, you’ll not only boost productivity but also show your employees they’re valued.


How to get started

  • Ask employees what they need: don’t guess—conduct quant and qual research to understand workflows, pain points, and opportunities for improvement.
  • Blend function with delight: employees are used to slick consumer apps. Bring that same level of polish to your internal tools.
  • Measure and refine: use analytics to track how employees engage with tools and refine the experience to better meet their needs.

Great and easy-to-use tools lead to happier employees—and happier employees create better outcomes for your customers and business.

5. Designing for accessibility and inclusion

Accessibility isn’t just about ticking a box—it’s about making sure everyone can use and enjoy your product. From people with disabilities to those in different cultural or linguistic contexts, designing for inclusivity creates better experiences for all users. Designers have been championing this for years, and considering a wider range of user types is now becoming the norm within design processes.

Why it matters in 2025
Accessibility is no longer optional. Updates to the Americans with Disabilities Act (ADA) and European accessibility regulations in late 2024 have certainly brought accessibility to the forefront. It’s a legal requirement in many places, but beyond that, it’s simply good business. When you design with inclusivity in mind, you expand your audience, build goodwill, and create more equitable experiences.


How to get started

  • Test early and often: don’t wait until the end of the design process. Test for accessibility at every stage to catch and fix issues before they become major problems (a minimal automated check follows this list).
  • Think beyond compliance: standards like WCAG are a starting point, but true accessibility means creating delightful, intuitive experiences for all users.
  • Educate your teams: make sure everyone involved in product development understands the principles of accessible and inclusive design.
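As one small example of testing early, here is a minimal sketch of an automated check you could run on rendered HTML during development. It assumes the beautifulsoup4 package, flags only two common machine-detectable issues, and is no substitute for full WCAG review or testing with assistive technologies and real users.

```python
# Minimal sketch: a tiny accessibility lint that flags two common, machine-detectable
# issues (images without alt text, links with no readable text). It covers only a
# narrow slice of WCAG and does not replace testing with assistive technologies
# and real users. Requires the beautifulsoup4 package.

from bs4 import BeautifulSoup

def basic_a11y_issues(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    issues = []
    for img in soup.find_all("img"):
        if img.get("alt") is None:  # alt="" is valid for decorative images
            issues.append(f"<img src='{img.get('src')}'> is missing an alt attribute")
    for link in soup.find_all("a"):
        if not link.get_text(strip=True) and not link.get("aria-label"):
            issues.append(f"<a href='{link.get('href')}'> has no readable link text")
    return issues

page = '<img src="hero.png"><a href="/pricing"><img src="icon.png" alt=""></a>'
for issue in basic_a11y_issues(page):
    print(issue)
# <img src='hero.png'> is missing an alt attribute
# <a href='/pricing'> has no readable link text
```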

Furthermore, AI-driven accessibility solutions have become more viable, enabling companies to detect and address accessibility issues in real time during product development—a capability that was not reliably available in 2024. Accessibility isn’t just the right thing to do—it’s an opportunity to innovate and create products that work better for everyone.

Looking ahead to 2025

These trends—AI-driven personalization, advanced analytics, digital ethnography, employee experience tools, and greater focus on accessibility—are reshaping the way organizations think about user experience. By embracing these ideas now, you’ll be ready to build smarter, more inclusive products that delight your users and drive your business forward.

At Grand Studio, we’re here to help you navigate these trends and design solutions that make a difference. Let’s make 2025 the year you transform the way your users experience your products.

From Ideas to Market: Designing Successful Products

For those starting a new service or digital product, the path to a successful launch and adoption can, at times, seem daunting or opaque—particularly if it’s the first venture of its kind for a company, be they a start-up or an enterprise org.

When Grand Studio takes on defining a new product or service, we take a similar holistic approach to what an internal product management team might take, including: product definition and strategy, establishing a brand and design system, and roadmapping prioritized features. Here’s a peek at how our approach can help any organization when they are faced with building something new.

To begin: use what you have

One of our long-term collaborations this year has been creating a new digital suite in the complex cybersecurity space. To move quickly and effectively, we and our client partners have been leveraging existing processes they developed from their consulting experience within cybersecurity. 

This practice, of digging into successful existing processes to further define digital tools or automations, is one we use with many of our clients and remains a key way to successfully bring a new tool to market. It’s important to know where the team is starting and where the gap is to properly define a new product. It can not only help the design and development teams to know what they’re building when an existing set of processes is documented, but it can also be supremely helpful in constructing an effective business plan for the new product or service by helping define the market, its size, its needs, etc.

Collaborate closely with partners

We make sure to work closely with our clients every step of the way as we’re designing everything from scratch. Regular meetings throughout the week are important times to define and align on goals and to finalize design direction. This includes leadership, developers, and subject matter experts being involved throughout the design process so that all areas of the business have visibility and proper input into design decisions. Not only does this create a stronger product overall, but it also develops mutual ownership of the design process and the decisions made throughout.

But even when final designs have been handed off, that doesn’t mean collaboration should end.  Being committed to building the right thing means being close to the actual development of the product and continuing to make design updates due to changes in direction, development issues, or any other discrepancies. Keeping an open channel of communication and being transparent about the process helps build that bridge while adhering to a timeline.

Start a new design system from scratch

Establishing a design system from the very beginning helps serve as a guide to tackle the growing complexity in any new initiative. A design system is a set of common, scalable design patterns (buttons, colors, interactions, etc.) documented so that designers can reuse and adapt them and developers can maintain them as they build out the screens that have been designed. Thinking about that cybersecurity tool we mentioned, we put the collaboration ethos to good use and empowered our client partners to help determine which sections of the tool would be best to design first in terms of establishing a system.

Another useful strategy here is to map out key screens and list all the user needs and features to determine where to start. Naturally, the system won’t be fully planned, as things will come up throughout design—that cybersecurity tool has grown and evolved since the first iteration, which has only improved the overall product. But having an agreed-upon and (hopefully) user-data-driven list to work from helps keep everyone moving quickly and efficiently towards launch.

Find ways to adapt to the ambiguity

At Grand Studio, we thrive when working through ambiguity. In fact, it’s one of our core tenets. When you have an initiative similar to our referenced cybersecurity tool, you may be presented with challenges like diving deep into a complex system, working with evolving business priorities inside your organization, and supporting developers as they work off design specs. Working through these challenges has required us at Grand Studio to be nimble enough to jump from one thing to the next while prioritizing our time to be most effective. This requires coordination within our team and the client to determine where design is needed to achieve the ultimate goal of building an initial release. There’s a continuous conversation between business, tech, and design around priority of tasks and the tradeoffs we inherit from decisions. Keeping ourselves adaptable amongst the whole cross-functional team is absolutely necessary to ensure the right features get launched. 

All in all, the challenge of creating something new is exciting. After a few months designing the cybersecurity tool, we’ve gone from high-level definitions to a full design system that informs a suite of tools for users to manage cybersecurity threats on devices. Though there’s plenty of work to be done between now and the initial release, we’ll keep pushing consistency in design, collaborating across functions, and working through ambiguity to create a great product.


Need help with designing the next big thing?

Grand Studio can help. We use a tailored approach to work with you to define and fully learn your problem space so that we ensure we are solving the right problems through proper design methodologies.

The Ideal GenAI Design Process

This is the second in a multi-part series about Generative AI, focused on how to set up your Generative AI project for success. Whether you’re new to GenAI, or have your own tactics to share, there’s more we can all learn about implementing this new technology.

Hopefully you’ve already read our Checklist for GenAI Readiness and you are following our best practices: from managing expectations, to cleaning up your data, to building in time for testing and embracing the whimsy of an exciting emerging technology. Let’s move on to the next step and talk about tips that can properly define an ideal design and project process for your GenAI product.

Here at Grand Studio, we tend to follow the classic double diamond design thinking process, defined by the British Design Council in 2005. We use this to guide our clients through a design process that focuses not just on making sure we’re designing something that works and is reliable, but something that people trust and are excited to use.  

Let’s dive into some of the major phases that can define an ideal GenAI design process.

Phase 1: Definition + Discovery

Does a GenAI solution effectively solve user problems?

We’ve said it before, and we’ll say it again: While GenAI is a useful, exciting technology, it isn’t guaranteed to be the answer to every problem under the sun. Here are some examples of what it can do now:

  • Automate repetitive, tedious tasks such as writing boilerplate emails, or scheduling meetings
  • Create outlines, write basic copy, and provide inspiration and ideas into the process
  • Create customized images – some of which can even be generated from any of your existing illustrations

And here are some examples of what GenAI is not quite suited for just yet:

  • Replacing human decision-making, or making suggestions that require long-term, contextual strategic thinking
  • Fact-checking itself against hallucinations or bias, given the technology’s limited grasp of the “truth”
  • Handling high degrees of nuance and problem-solving. Although many LLMs have passed crucial benchmarks built from exams we use to test people (such as the MCAT, the SAT, and bar exams), they’re not really capable of fully autonomous thinking (yet…).

As you and your team consider the possible applications for GenAI in your work, make sure you’re not replacing the difficult, nuanced, creative work that humans are uniquely capable of doing with a technology that is not yet well-suited for it. You will end up with unhappy employees, unhappy customers, and certainly unhappy technology.

Phase 2: UX/UI Design

What should the user interface and experience for your GenAI product look and feel like?

Now that you have identified and selected GenAI as the right technology for this project, it’s time to figure out how your users might interact with it. Understanding how GenAI will integrate into existing – or brand new – products is an emerging field of UX/UI design. We’re seeing everything from AI-enabled UI design engines that create UI from a user’s text or sketch input, to straightforward chatbots, to DJ players built on text inputs, to agentic co-pilots.

There are so many creative ways that users can interact with GenAI, and part of this initial building phase is to take the learnings from the Definition + Discovery Phase to understand what kinds of interaction will make the most sense to your users, solve their problems, and delight them. You and your team should ensure that the interface follows basic UX/UI heuristics, is user friendly, and intuitive. Don’t be afraid to use existing patterns, brand guidelines, voice, tone, and personality from familiar applications and products – but think carefully about what is unique about the interaction between your user and the AI and how it can become an extension of your brand that delights users. We’ve found that it’s best to implement incrementally and integrate into existing systems, as opposed to making something brand new. Even though this is an exciting, emerging space for UX/UI design, the same foundational principles still apply!

Phase 3: Beta Test

How do users interact with and understand the product, and what is required to build trust in the technology? 

    For anyone who has introduced a new tool or technology to users, you know that people don’t always react to or understand the technology in the same ways, or even in ways you could have anticipated! Add the current range of attitudes regarding GenAI – and the complexity of emotions people have when using it – and this effort starts to invite risk upon launch.

    Fortunately, to mitigate all of those risks we would recommend conducting a round of thorough beta testing to understand how your users will understand, interact with, and trust (or mistrust) your GenAI product. The goal is to learn more about your product and your users at the same time. Here are some things to monitor as you run a Beta test:

    • Prompting: What are the prompts people are using to interact with the GenAI? Do they understand how to edit or adjust their prompts when they get an unexpected, or unhelpful response?

      Consider: Think about creating a basic prompt library of pre-written prompts for common use cases (a minimal sketch follows this list). These can help empower users and ease their comfort levels using GenAI. Another plus is that the product will work more reliably!
    • Benchmarks: Speaking of reliability, it’s crucial that everyone is on the same page about how success for this product will be measured. Beta testing is a perfect opportunity to start seeing how well the product solves the problems you’ve designed it to solve.

      Consider: You can establish baselines by checking your model against existing industry benchmarks and seeing how it measures up. Keep in mind that these benchmarks will change if you are using a customized model, and that the models themselves are constantly changing and improving.
    • Change Management: People have lots of feelings about GenAI, and you can’t necessarily blame them. GenAI is constantly in the news as either the answer to the world’s problems, or signaling the end of people’s jobs. You’ll want to be sensitive to this when launching your product, and beta testing is a good opportunity to ask your users how they feel.

      Consider: Plan to add interviews or an open-ended survey to your beta test to surface what level of experiences people already have with GenAI, what they think about it, how much (or little) they trust the technology and its outputs, along with anything else that you think will make a difference in your adoption strategy once you reach product rollout.
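      To make the prompt library idea tangible, here is a minimal sketch of what one can look like in code. The use cases, template wording, and field names are hypothetical placeholders; the point is simply that users pick a vetted starting point instead of facing a blank text box.

```python
# Minimal sketch of a prompt library: pre-written, parameterized prompts keyed to
# common use cases, so users don't start from a blank text box. The use cases and
# template wording are hypothetical placeholders.

PROMPT_LIBRARY = {
    "summarize_ticket": (
        "Summarize the following support ticket in three bullet points, "
        "then suggest one next step for the agent:\n\n{ticket_text}"
    ),
    "draft_status_update": (
        "Draft a short, plain-language status update for {audience} about the "
        "project '{project_name}'. Keep it under 120 words."
    ),
}

def build_prompt(use_case: str, **fields: str) -> str:
    """Fill a pre-written prompt template for a known use case."""
    template = PROMPT_LIBRARY.get(use_case)
    if template is None:
        raise ValueError(f"Unknown use case: {use_case!r}")
    return template.format(**fields)

print(build_prompt("draft_status_update", audience="executives", project_name="Atlas"))
```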

    Phase 4: Refine + Deploy

    How does the product work in the real world? 

      The final phase of your GenAI product is getting everything you’ve made, tested, and refined out into the real world. You will want to make sure you’ve incorporated all the research findings you’ve uncovered along the way.

      Make sure you’ve built in methods of continuous improvement and measurement. The last thing you want is shaky confidence in your product’s ability to function correctly or – even worse – for it to cause problems somewhere else. One thing we’ve done in the past at Grand Studio is build short surveys into the end of GenAI-enabled chatbot conversations that ask for feedback specifically about how the GenAI performed. We can then use that feedback to refine our model and measure reliability and user trust.
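      Here is a minimal sketch of what that end-of-chat feedback loop can look like: log a simple rating (plus an optional comment) per conversation and track the share rated helpful over time. The field names and sample data are hypothetical placeholders.

```python
# Minimal sketch of post-chat feedback: record a thumbs-up/down plus an optional
# comment after each GenAI conversation, then track a simple reliability signal.
# The field names and sample data are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class ChatFeedback:
    conversation_id: str
    helpful: bool            # thumbs up / thumbs down on the GenAI's answer
    comment: str = ""        # optional free-text feedback

def helpfulness_rate(feedback: list[ChatFeedback]) -> float:
    """Share of conversations rated helpful; a crude but trackable trust signal."""
    if not feedback:
        return 0.0
    return sum(f.helpful for f in feedback) / len(feedback)

log = [
    ChatFeedback("c1", True),
    ChatFeedback("c2", False, "It made up a policy that doesn't exist"),
    ChatFeedback("c3", True),
]
print(f"Helpful in {helpfulness_rate(log):.0%} of conversations")  # Helpful in 67% of conversations
```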

      Finally, ensure you have your best testing-informed change management plan in place once the product is deployed. In our experience, complex matrixed organizations can face a lot of tool fatigue (when new internal products and tools are constantly being rolled out so fast that people aren’t sure of what they do, or how to use them). An effective change management process can help guide people toward trust and understanding once the tool is out there.


      Need help with the design process for your GenAI project?

      Grand Studio can help. We use a tailored approach to work with you to define and fully understand your problem space and how GenAI can be best utilized to solve problems for end users. Be sure to stay tuned to the final part of the series: Advocating for the Human in a GenAI World.  

      Successful Multi-Agency Collaboration

      Hiring two agencies on the same project? Here’s what it takes to make it work.

      It’s not hard to imagine why organizations can be wary of hiring two (or more) outside parties to work together on one project. Will the managerial strain be out of hand? Will they communicate as well as they need to in order to get the job done? Will you have to spend hours a week putting out fires and resolving disputes?

      Fearful of the coordination that comes with hiring more than one consultancy or agency, many organizations opt for full-service agencies that can manage all aspects of the work. While that may be the absolute right call for some projects, other efforts will really benefit from combining the unique specialized skill sets of different organizations — because of course, no one organization can be an expert in all things.

      If you think your work may benefit from specialists that span multiple agencies, we’re here to tell you that it doesn’t have to be quite so scary. As a consultancy that’s collaborated with other agencies many times before, we’re big believers in the value of diverse skill sets for solving complex problems. And here’s what you can do to set your organization up for multi-agency success.

      Hiring for group-work amongst agencies

      Finding a consultancy who can do great work alongside another consultancy is partly common sense. Naturally, you’ll want to identify parties that are strong communicators — which also means they are good at listening. It also helps to suss out egotism… does the agency seem more focused on doing good work, or impressing you? If you sense a group that appears very concerned about the optics of their work, that could lead to a jockeying-for-position or credit-taking game that turns other consultancies off and puts them on the defensive. 

      Another thing to look for is complementary styles of problem solving among agencies. That doesn’t necessarily mean they should do things the exact same way — diversity in problem-solving techniques will usually enrich a project. But will there be enough common language to discuss ideas and reckon with differences in process? We’ve had many successful collaborations with agencies that operated very differently from us, but when each of us saw the other’s approach as a strength we could learn from, the divergence was a net positive.

      And of course, it’s always good to straight-up ask agencies whether or not they’ve worked alongside other agencies in the past. How did that go? How did it shape their point of view on what makes for successful collaborations? You may even be able to talk to parties they’ve worked alongside in the past.

      Setting multiple agencies up for success (without creating a managerial nightmare)

      When it comes to making sure the groups you’ve hired will be as successful as possible, it all comes down to delivering as much up-front clarity as possible. All agencies should be crystal clear on why they have been hired, and the value they are expected to be contributing. This should also be clear amongst parties — each agency should understand their own contribution in relation to the contributions of parties around them. Some overlap in responsibility is completely fine, as long as this overlap is named and explained. And this doling out of responsibilities and expectations should come from the person hiring, so as to keep things as clear and undeniable as possible.

      It’s also extremely helpful to do some situation-planning ahead of time, discussing things like how decisions will be made and how disagreements should be resolved. We all know any complex project is bound to experience change and surprise, but having expectations around how those will be handled can ease tension and help each party be their most effective and collaborative self.

      Once you get the agencies going, don’t be afraid to leave them to it. While your initial presence is key, eventually, consultants need to develop their own rapport with one another and ease into a rhythm. Their relationship should become their responsibility in a way that does not need to be mediated by you. 

      We don’t deny the benefits of full-service agencies — there are times when ease of operation indeed outweighs a need for specialists. But managing multiple agencies doesn’t have to be a headache. It’s how we’ve done some of our best work. 

      Scoping a project that may benefit from our collaboration? We’d love to hear from you!

      Get in touch

      A Checklist for GenAI Readiness

      This is the first in a multi-part series about Generative AI, focused on how to set up your Generative AI project for success. Whether you’re new to GenAI, or have your own tactics to share, there’s more we can all learn about implementing this new technology.

      With the many offerings available now in the GenAI landscape, from OpenAI’s DALL-E and ChatGPT – already at a 4o version – to Meta’s LLaMA, to Microsoft’s Orca, to Google’s multiple AI offerings, generative AI large language models (GenAI LLMs) now feel a bit inescapable. It can be easy to get caught up in the excitement about adding a GenAI LLM-enabled tool to your company’s portfolio, but it can be difficult to know where to start and what needs to be in place to succeed. So before we discuss the various offerings or how to implement LLMs, let’s take a look at how you can set your team up for success—whether you’re in Engineering, Product, or Design—before embarking on your next GenAI LLM project.

      1. Know what you can change with LLMs – and consider how you can change the rest 
        This is a question we think about all the time at Grand Studio: what problems can – and should – be solved with a given technology? With technologies as complex as LLMs, which involve trillions of tokens, years of training, and millions of dollars, designing a new LLM might be a bit out of reach for many. But even for those who can access these solutions, it still doesn’t mean that all aspects of their problems should be solved with a GenAI modality. That’s why exploring what the problems are, how users behave, what tools they use, and what combination of solutions may most holistically address the issue(s) is an important first step. And if a GenAI LLM is in fact the right solution, there may still be quite a few elements of a problem that can be solved and improved outside of GenAI.

        One recent example came up as we were designing a GenAI LLM solution: one of the use cases we wanted to tackle had to sit outside the solution’s access point due to security measures and therefore could not be addressed by the GenAI. We were able to do a UX/UI heuristic pass and create a set of digital UX adjustments that reduced the issues with that use case so much that the amount of money the enterprise was spending dropped an entire contract tier.  So don’t underestimate the impact of UX/UI within a holistic solution.
      2. Clean up your data
        We’ve said it before and we’ll say it again: your GenAI will flourish or fail depending on how clean and organized your data set is. The general data sets that inform current LLMs are massive, and in order to get answers that are relevant and accurate for your company, or even your industry, you’ll likely need to help the LLM focus in some way. Data lakes (essentially centralized stores of your data that an LLM can be required to check first before generating answers) and carefully crafted back-end directions on what information to present and how (called system prompts) can help your LLM prioritize your data before going into its general knowledge; a minimal sketch of this retrieval-first pattern appears at the end of this checklist. The trick is that the data has to be organized, well-written, and clean of errors first. This can be a big ask if you are the kind of company that has a huge knowledge base archive that maybe hasn’t been overhauled in years.

        One way to tackle this is to start small(er). You won’t be able to get away with only 100 clean pieces of data, but you might be able to get away with ~ 1000. Starting small and establishing a content governance structure can help you out in the long run, as knowledge becomes more relevant and up to date, both for your new GenAI buddy and for the employees in the business itself. (And if content governance is new to you, that’s something that consultancies like Grand Studio can help with.)
      3. Testing, testing, testing
        GenAI is a new – and therefore unpredictable – technology. People have a lot of mixed feelings about GenAI; some people are excited about what they see as a tool of the future, while others are skeptical or even afraid of what GenAI will mean for their job security and place in the workforce. Building multiple moments of user-centered research and  testing into your project plan can help you build empathy with your target audience, with an added benefit of not only spotting technical bugs and glitches, but also helping people start to build trust and understanding of what this technology is capable of. Thorough research with the right users can also help your internal comms or external product marketing teams create a finely-tuned product launch messaging and rollout plan. (As it happens, Grand Studio is so committed to user-centering all products and services that we’ve created a public-facing framework to help put this into action).
      4. Embrace the whimsy
        Finally, as you’re gearing up to get started on your exciting new GenAI-enabled product, it’s important to set some grounded expectations and cut through the marketing hype. GenAI, and LLMs in particular, are not silver bullets. They are emerging technologies that are still being experimented with, developed, and tested every day. These LLMs are limited in what they are capable of; they’re not truly “intelligent” and they can’t read your – or your users’ – minds. And there’s still a learning curve to understanding how to get the best out of these technologies.

        Bias and hallucinations are real risks that could open you and your company up to potential liability depending on your target audience and industry. Company security is an additional concern: data fed into an LLM – even a company’s proprietary LLM or wrapper – is impossible to remove once it’s in there, so there will need to be additional protections in place. Having these hard conversations about why your solution should include a GenAI-enabled product and what the expectations of this technology are before you get started will save everyone a lot of time and pain later on as these limitations make themselves known.
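      As a concrete illustration of the data lakes and system prompts described in point 2, here is a minimal sketch of a retrieval-first call, using the OpenAI Python SDK purely for illustration. The search_knowledge_base() function, model name, and prompt wording are hypothetical placeholders; the retrieval step would be backed by your own data lake, search index, or vector store.

```python
# Minimal sketch of a retrieval-first pattern: search your own curated knowledge
# base, then hand the results to the model with a system prompt that tells it to
# stay grounded in that material. Uses the OpenAI Python SDK for illustration;
# search_knowledge_base(), the model name, and the prompt wording are placeholders.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You answer questions for our support team. Use ONLY the provided context. "
    "If the context does not contain the answer, say you don't know."
)

def search_knowledge_base(question: str, top_k: int = 3) -> list[str]:
    """Placeholder: query your own search index or vector store for relevant passages."""
    raise NotImplementedError("Back this with your data lake / search index")

def grounded_answer(question: str) -> str:
    context = "\n\n".join(search_knowledge_base(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0,  # keep answers as deterministic as possible
    )
    return response.choices[0].message.content
```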

      Overall, GenAI is an exciting thing that has a whole world of potential and possibilities attached to it. We believe that being honest about the technology’s limitations and setting yourself up for success as best as possible will give you the greatest chance to make the best use of this emerging technology and its capabilities.

      Stay tuned for the next part of this series: The Ideal GenAI Design Process

      4 Things You Won’t BELIEVE Design Can Learn From Buzzfeed

      July: a time for pools, slushies, bike-riding and hanging out with friends. What better way to celebrate mid-summer than to look for inspiration in one of the quintessential lighthearted media outlets?

      Without further ado, here’s what design – at all levels – can learn from the Buzzfeed approach.

      1. Bite-sized content works.  People read listicles and short articles because they are brief snippets they can parse quickly and move on. Often in design, we try to pack too much in, and it gets lost in the process. Bullet points of quick takeaways, illustrative impact quotes or screens, and executive summaries work really well – with an offer to dive deeper for those who genuinely want more.
      2. Nothing engages like gossip. Put another, more design-y way, stories anchor everything. We all want that tea spilled and frankly, when details are grounded in a narrative that starts with a bang and sets the stage, tension that builds, and an ending that wraps up that portion of the story (even if the overarching narrative will continue on), we’re listening the whole way through. Along those lines…
      3. Juicy headlines draw people in. Is it clickbait or is it cutting through the noise to grab your audience’s attention? (Both?) We can do the same in design when communicating important research insights with leadership or naming design options with stakeholders managing busy schedules. Marketing exists for a reason, and oftentimes we in design don’t do a good job of using it for ourselves. Juicy headlines or naming conventions can help our business stakeholders understand what problem is being solved, or what they or their users will get out of a particular solution from the get-go, and bring them along in a productive, collaborative way.
      4. Embrace the whimsy. Buzzfeed always has a silly quiz on things like “what your favorite sandwich says about your future” – and people love those. Sometimes design takes on the personality of business, and the thing is, we really can’t afford to take ourselves too seriously, for two reasons:
        1. We need to take the work seriously but take ourselves lightly in order to really enable creativity to flow. Putting on formal structured thinking and expression can feel quite confining to many designers. Which leads to… 
        2. We’re the “creatives.” (Yes, everyone is creative but we’re the people who are expected to bring the outside-the-box thinking and artifacts). We’re not only allowed but expected to bring some amount of rule-breaking and whimsy to the table. 

      Put another way, if not us then who? ESPECIALLY within your own teams. So have fun. Do a little something silly. Have a team-building activity that’s a little weird (we’ve done Secret Santa lunches sent to each other’s houses and at-home Nailed It challenges). Change your Teams photo to a raccoon meme. Use gifs in communication.

      When you embrace the silly, you make space for other people to relax, bring their full selves, and create a more creative and innovative space for work to take place. A place where they can take risks – at first with just themselves, but then with the products and ways of working. And smart risk is how you get to great.

      Want help figuring out how to set up and maintain a high-functioning and impactful design team? Drop us a line! 

      Scaling Research by Activating the Frontline

      Innovation is the name of the game in UX research; we are often asked to find creative ways of gathering insights from end users with smaller teams and even smaller timelines and budgets for recruitment. As we continue to seek out ways of reaching people, there’s an often-untapped source of research insight: the frontline employees who work with our target users day in and day out. This is a key strategy, particularly when dealing with any protected or vulnerable population, such as patients or children, who are often very difficult to access for a variety of (very good!) reasons.

      Frontline employees are the boots-on-the-ground people who are interfacing with users every day. Depending on the industry and problem space they might be receptionists, call center employees, nurses, cashiers, etc. They spend their time putting out fires and hearing directly from customers about what’s working and what isn’t. 

      So, where do I start? 

      First things first, activating any group to be part of research often starts with building relationships. Frontline employees are busy people who are usually being managed by busy people who are often concerned about preserving their teams’ bandwidth and protecting their time. To reach them you’ll need allies, and allies start with relationships. Start by getting to know their managers and team leaders (or whatever the equivalent role is). SME (Subject Matter Expert) interviews can be a great method here, both to learn more about pain points and also to help people understand that you’re there to help them and their teams with their jobs. Research is a way of letting people be heard, and that’s a valuable thing you can do for them.

      Once you’ve built a relationship with the managers and team leads, you can start asking about getting access to their teams who are interfacing directly with your target audience. 

      I’ve built relationships…now what? 

      Now that you’ve gotten access to the frontline employees, there are a couple different research methods we suggest considering. This is your chance to get the inside scoop about what kinds of pain points exist for users and employees, what kinds of tools they use, what kinds of ideas or suggestions they have for improvement, and more. Keep in mind that while research can be hugely impactful, if you’re not careful it can also be very time-consuming and extractive – meaning it takes knowledge, expertise, energy, etc from people without giving anything of value back. So consider how much bandwidth, time, and energy people have when planning your research, as well as what you may be able to give back to them. 

      Two non-extractive options we’ve leveraged in the past are: 

      Diary studies 

      Diary studies are an unmoderated research method that asks someone to keep a log about their experience at certain times or in response to certain triggers, such as after speaking to a customer or using a piece of software. You can ask people to take photos of key moments, record their emotions or activities during or after certain events, or provide reflections on changes they might have made or ideas they have. Diary studies are a great way to turn your frontline employees into researchers themselves by having them think about and interrogate their own workflows, software, and scripts when interacting with end users.

      Diary studies can be very impactful because they are straight from the participant’s unfiltered perspective and are designed to happen in the moment, so they are less likely to be misremembered. Some drawbacks include people forgetting to fill them out at the right times – or at all, especially if they are busy – or providing unclear information that is difficult to follow up on and get additional clarity. 

      Passive prompt wall

      This is a good method to use if your participants all share a physical space – such as an office  or breakroom. Setting up an installation such as oversized post-it papers with markers and prompts that participants can fill out on their off time can provide you with first-hand insights about how people are feeling, what they’re hearing from users, and what ideas they might have for how to improve the products or services they deal with day-to-day. 

      One watchout for this method is that you need to be mindful of how you word your prompts so they are easy to understand and surface relevant information. There’s always a risk of people leaving unserious, off-topic responses in unmoderated, forum-style research, so have a plan in place to vet some of the more suspicious answers you receive (possibly with the help of those SMEs you interviewed earlier).

      I’ve gathered my research, what do I do now? 

      Congratulations on gathering research from frontline employees! Now it’s up to you to synthesize your insights and pull out the necessary takeaways. Consider conducting 1:1 interviews or focus groups to follow up on interesting themes and patterns. If you are developing concepts or prototypes out of your insights, frontline employees can be a great group of people to start gathering some validation on your ideas. 

      Research is an ever-evolving practice, and finding new ways to learn about what works and what doesn’t can sometimes feel like a moving target. But if you build relationships early and expand your participants to include not just those experiencing the pain points first-hand but also the people experiencing them second-hand, you can capture more data in richer and more informed detail than ever before.

      Interested in how you can activate your frontline employees? Drop us a line!

      Unsolicited Advice for Leveraging a GenAI LLM

      At this point, you’re probably pretty familiar with the AI hype out there. You’ve likely read that GenAI (like DALL-E or ChatGPT) is great for generating both visual and text-based content, and AI overall can be good for identifying patterns, particularly in large data sets, and providing recommendations (to a certain degree).

      But you may also be familiar with the myriad ways GenAI has gone sideways in recent months (ex: Intuit’s AI tax guidance debacle, New York City’s law-breaking chatbot, the Air Canada lawsuit, and so many more). That doesn’t mean you need to stop experimenting with it, of course. But it does mean that the folks warning about it not being ready quite yet have some valid points worth listening to. 

      Having built several AI solutions, including a recent GenAI LLM (large language model) solution, we have some unsolicited advice to offer for anyone leveraging a GenAI LLM.

      Don’t use GenAI for situations where you need a defined answer.


      As evidenced in all the examples above, GenAI chatbots will – and often do – make information up. (These are called hallucinations within the industry, and it’s a big obstacle facing LLM creators.) The thing is, this is a feature, not a bug. Creating unique, natural-sounding sentences is precisely what this technology is intended to do and fighting against it is – at least with the current technology – pointless. 

      There are some technical guardrails that can be set up (like pointing the system to first pull from specific piles of data, and crafting some back-end prompts to tell it not to make things up) yet still, eventually, our bot friends will find their way to inventing an answer that sounds reasonable but is not, in fact, accurate. That is what they are meant to do. 

      In situations where you need defined, reliable pathways, you’re better off creating a hardcoded (read: not GenAI) conversation pathway that allows for more freeform conversation from the user while responding with precise information. (For the technically-minded, we took a hybrid format of GenAI + NLU for our latest automation and found it quite useful for ensuring that something like following a company-specific process for resetting a password was accurate and efficient – and importantly, in that use case, also more secure.)
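      To illustrate the general shape of that hybrid approach (a simplified sketch, not our actual implementation), an NLU-style intent classifier routes well-defined requests to hardcoded, auditable flows, and only unmatched requests fall through to the generative model. The intent names, classifier, and fallback function are hypothetical placeholders.

```python
# Minimal sketch of a hybrid routing layer (not Grand Studio's actual build):
# an NLU-style intent classifier decides whether the request maps to a hardcoded,
# auditable flow; only unmatched requests fall through to the generative model.
# classify_intent(), the intents, and generate_freeform_reply() are placeholders.

HARDCODED_FLOWS = {
    "reset_password": [
        "I can help with that. First, confirm your employee ID.",
        "Thanks. I've sent a verification code to your registered email.",
        "Enter the code here to finish resetting your password.",
    ],
}

def classify_intent(message: str) -> str | None:
    """Placeholder for an NLU intent classifier (keyword rules, a small model, etc.)."""
    return "reset_password" if "password" in message.lower() else None

def generate_freeform_reply(message: str) -> str:
    """Placeholder for the GenAI fallback, with its own grounding and guardrails."""
    return "(generative answer, grounded in curated data)"

def route(message: str) -> list[str]:
    intent = classify_intent(message)
    if intent in HARDCODED_FLOWS:
        return HARDCODED_FLOWS[intent]          # precise, company-specific steps
    return [generate_freeform_reply(message)]   # freeform conversation elsewhere

print(route("I forgot my password"))    # hardcoded reset flow
print(route("What's our PTO policy?"))  # GenAI fallback
```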

      Know thy data—and ensure it’s right.


      I know it’s been said a million times over but a pile of inaccurate, poorly-written data will provide inaccurate, poorly-written responses. GenAI cannot magically update your data to be clean and accurate – it can, over time, generate new information based on existing information and its style (which should still be checked for accuracy) but asking it to provide correct information when it’s hunting for the answer through incorrect information is an impossible task. It cannot decipher what is “right” or “wrong” – only what it gets trained to understand is right and wrong. 

      It’s important, then, to know what the data you’re starting with looks like and do your best to ensure it’s quality data – accurate, standardized, understandable, etc. Barring time to properly train the model on your data (a serious time commitment, but well worth it for anyone wanting proprietary or custom answers), starting with a clean data set is your best bet.
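      Here is a minimal sketch of the kind of automated hygiene pass that can help you know your data before any model sees it. The record fields, the two-year staleness rule, and the sample articles are hypothetical placeholders.

```python
# Minimal sketch of a pre-ingestion hygiene pass for knowledge-base articles:
# flag records that are empty, stale, or duplicated before they ever reach the
# model. The field names and the two-year staleness rule are hypothetical.

from datetime import date, timedelta

STALE_AFTER = timedelta(days=730)  # treat articles untouched for ~2 years as stale

def audit_articles(articles: list[dict]) -> dict[str, list[str]]:
    issues = {"empty": [], "stale": [], "duplicate_title": []}
    seen_titles = set()
    for article in articles:
        title = article.get("title", "").strip()
        body = article.get("body", "").strip()
        updated = article.get("last_updated")
        if not body:
            issues["empty"].append(title or "<untitled>")
        if updated and date.today() - updated > STALE_AFTER:
            issues["stale"].append(title)
        if title.lower() in seen_titles:
            issues["duplicate_title"].append(title)
        seen_titles.add(title.lower())
    return issues

print(audit_articles([
    {"title": "Reset MFA", "body": "Steps...", "last_updated": date(2019, 3, 1)},
    {"title": "Reset MFA", "body": "", "last_updated": date(2024, 6, 1)},
]))
# Flags the stale 2019 article, the empty body, and the duplicate title.
```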

      Bring the experts in early.


      When people have been experimenting with the technology and a potential solution for a while, there is pressure to “get it done already” by the time the experts roll in – pressure that doesn’t allow for the necessary exploration and guardrail-setting, particularly in an enterprise setting where there are plenty of Legal, Compliance, Security, and even Marketing hurdles to clear.

      From both personal and collected experience, it’s worth noting that often the initial in-house experimentation focuses on the technical aspects without user experience considerations, or even why GenAI might – or might not – be the right solution here.  That’s going to take a little time. So it’s worth bringing in design and/or research experts, whether in-house or consultants, alongside the initial technical exploration to do some UX discovery and help the entire sussing-out process happen in tandem with the technical exploration. This can provide a clear picture of the business case for pursuing this particular solution. 

      To help out, the Grand Studio team created a free, human-centered AI framework for an ideal AI design & implementation process.

      Interested in knowing how to start a GenAI project of your own? Drop us a line! 

      Stretching Lean Budgets Strategically

      Every business hits times when the budget gets tighter — it’s an inevitable part of being in it for the long haul. For a lot of industries, their short-term futures are a bit unpredictable right now, leading to questions about how to best set up their business to weather any twists and turns. 

      In the face of uncertainty, many organizations scale back as quickly as possible to alleviate the pressure on their overhead. While understandable, rushed decisions can sometimes be short-sighted decisions, making it harder for those businesses to rebuild once lean times have passed. 

      Just as strategy is important in times of growth, it’s also key in times of reduction. Whether you’re the one facilitating trims or absorbing them as best you can, read on for our take on putting strategy into leaner times. 

      Center existing customers

      While you can’t completely lose sight of expansion, the math is simple — it’s much less expensive to retain an existing customer than it is to acquire a new one. In moments when efficiency with available budgets is essential, the best move is often to invest the majority of your efforts in customer retention through the products, services, and/or tech systems your teams may already be running. This means maintenance, yes, but it also means uncovering new ways to provide benefits for them, ensuring they will return to you. Growth is important, and should not be forgotten, but it’s important to balance such endeavors with true investment in preserving what’s working for you today. 

      Step back and learn, and go “lightweight”

      We’ve seen huge payoffs for organizations that take budget setbacks as opportunities to zoom out on their business and take a closer look at their products and services. What makes the most sense to focus on in this new climate? Where is infrastructure/development urgently needed, and where can it wait? Which projects are going to best prepare the company for when the market forges ahead? In all likelihood, a change affecting your business also means changes for the partners and clients around you. How might these circumstances affect your short- and long-term success strategies? 

      In lean times, it’s also very important to get to the learnings quickly so you can pivot if needed. Consider stepping back to ask what the scrappier, more agile version of your process might look like. You want to be investing efforts in the right places, so getting that feedback loop on a quicker cycle is key.  

      Consider how projects are shelved

      When an organization tightens the belt, it’s almost certain that internal priorities will need to shift. This often involves shelving longer-term projects, and refocusing resources to work on lower-hanging fruit that will generate income in the short term.

      Once the worst of the budget drought has passed, though, most organizations will want to pick up where they left off on those shelved projects. The problem is that many times, the employees with the institutional knowledge to restart those projects have been shuffled around in a reorg, laid off, or have left the company out of fear for the business’s future. Countless times, we’ve seen work either need to get redone because there was not enough context to pick it back up again — or get restarted from scratch, only for the team to realize midway through that much of the work had already been done.

      While it may not be realistic to avoid any kind of turnover or layoffs, consider using the lower-budget times to thoroughly document any mid-flight work that needs temporary shelving. This includes the work done to date, by whom, what was learned and the impact moving forward, and what still needs to be learned or done. Taking the time to do this in “quieter” times is hugely important to not wasting effort when your business is finally in recovery and expansion mode. 

      Judicious use of outside help 

      It’s hard to justify spending any money when your budget is limited. That said, given the overall fear of making the wrong decision that can pervade stressful times, it can be helpful to call on outside eyes for perspective and strategic support. Things like day-long prioritization workshops, short research sprints, or new tech trainings can be sensible ways to spend less money but still get a lot of impact and keep initiatives moving forward.

      Another smart way to use outside support in tighter times is as short-term personnel augmentation. When you can’t commit to retaining FTEs for each role you need, hiring an agency can be a smart way to access a wide array of skill sets for less money.

      Plan like the storm will pass — with the right strategy, you can help make sure it does. And if you’re looking for a partner in weathering that storm, we’d love to hear from you.

      Leveraging AI in User Research

      Grand Studio has a long history of working with various AI technologies and tools (including a chatbot for the underbanked and using AI to help scale the quick-service restaurant industry). We’ve created our own Human-Centered AI Framework to guide our work and our clients to design a future that is AI-powered and human-led and that builds on human knowledge and skills to make organizations run better and unlock greater capabilities for people. When ChatGPT hit the scene, we started experimenting right away with how it could improve our processes and make our work both more efficient and more robust. 

      Given our experience with what AI is good at doing (and what it’s not), we knew we could use ChatGPT to help us distill and synthesize a large amount of qualitative data in a recent large-scale discovery and ideation project for a global client. 

      Here are some takeaways for teams hoping to do something similar: 

      1. Don’t skip the clean-up. As they say: garbage in, garbage out. Generative AI (GenAI) tools can only make sense of what you give them – they can’t necessarily decipher acronyms, shorthand, typos, or other research input errors. Spend the time to clean up your data and your algorithmic synthesis buddy will thank you. This can also include standardized formats, so if you think you may want to go this route, consider how you can standardize note-taking in your upfront research prep.

      2. Protect your – and your client’s – data. While ChatGPT doesn’t currently claim any ownership or copyright over the information you put in, it will train on your data unless you make a specific privacy request. If you’re working with sensitive or private company data, do your due diligence and make sure you’ve scrubbed important or easily identifiable data first. Data safety should always be your top priority.

      3. Be specific with what you need to know. ChatGPT can only do so much. If you don’t know what your research goals are, ChatGPT isn’t going to be a silver bullet that uncovers the secrets of your data for you. In our experience, it works best with specific prompts that give it clear guidelines and output parameters. For example, you can ask something like: 

      “Please synthesize the following data and create three takeaways that surface what users thought of these ideas in plain language. Use only the data set provided to create your answers. Highlight the most important things users thought regarding what they liked and didn’t like, and why. Please return your response as a bulleted list, with one bullet for each key takeaway, with sub-bullets underneath those for what they liked and didn’t like, and why.” 

      Doing the upfront human-researcher work of creating high-quality research plans will help you focus on the important questions at this stage. (A minimal scripted version of this prompt appears after this list.)

      4. It’s true, ChatGPT gets tired. As with any new technology, ChatGPT is always changing. That being said, the 4.0 version of ChatGPT that we worked with demonstrated diminishing returns the longer we used it. Even though the prompts were exactly the same from question to question, with the input of fresh data sources each time, ChatGPT’s answers got shorter and less complete. Prompts asking for three synthesized takeaways would be answered with one or two, with fewer and fewer connections to the data sets. By the end, its answers were straight-up wrong. Which leads us to our final takeaway:

      5. Always do an audit of the answers! Large language models like ChatGPT aren’t able to discern whether the answers they provide are accurate or what you were hoping to receive. They’re also incredibly confident when providing answers, even when those answers are wrong. This means you can’t blindly rely on them to give you an accurate answer. You have to go back and sift through the original data and make sure that the answers line up with what you, the researcher, also see. Unfortunately, this means the process will take longer than you were probably hoping for, but the alternative is incomplete or incorrect answers – which defeats the purpose of synthesis in the first place and could cause the client to lose trust in you.
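      For teams that want to script this synthesis step rather than paste data into a chat UI, here is a minimal sketch using the OpenAI Python SDK. The model name and system message are placeholders, the instructions mirror the example prompt above, and point 5 still applies: audit every output against the raw data.

```python
# Minimal sketch of scripting the synthesis step with the OpenAI Python SDK rather
# than pasting into the chat UI. The model name is a placeholder; the instructions
# mirror the example prompt above. Always audit the output against the raw data.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYNTHESIS_INSTRUCTIONS = (
    "Please synthesize the following data and create three takeaways that surface "
    "what users thought of these ideas in plain language. Use only the data set "
    "provided. Return a bulleted list with one bullet per key takeaway, and "
    "sub-bullets for what users liked, didn't like, and why."
)

def synthesize(cleaned_notes: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model your team has approved
        messages=[
            {"role": "system", "content": "You are a careful UX research synthesis assistant."},
            {"role": "user", "content": f"{SYNTHESIS_INSTRUCTIONS}\n\nData:\n{cleaned_notes}"},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

# Example: synthesize one session's cleaned, de-identified notes at a time,
# then audit each result against the original notes before sharing it.
```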

      Outcome: Did using ChatGPT speed up our synthesis significantly? Absolutely. Could we fully rely on ChatGPT’s synthesis output without any sort of audit or gut check? Not at all. We’ll keep experimenting with ways to incorporate emerging technologies like Generative AI into our workstreams, but always with research integrity and humans at our center. 

      Interested in how GenAI might work for your organization? Drop us a line – we’d love to chat!