Articles on Smashing Magazine — For Web Designers And Developers
https://www.smashingmagazine.com/
All rights reserved 2024, Smashing Media AG

Longing For May (2024 Wallpapers Edition)
https://smashingmagazine.com/2024/04/desktop-wallpaper-calendars-may-2024/
Tue, 30 Apr 2024 13:00:00 GMT

Inspiration lies everywhere, and as a matter of fact, we discovered one of the best ways to spark new ideas: desktop wallpapers. For more than 13 years now, we have challenged you, our dear readers, to put your creative skills to the test and create wallpaper calendars for our monthly wallpapers posts. Whether you’re into illustration, lettering, or photography, the wallpapers series is the perfect opportunity to get your ideas flowing and create a small artwork to share with people all around the world. Of course, this month was no different.

In this post, you’ll find desktop wallpapers created by artists and designers who took on the creativity challenge. They come in versions with and without a calendar for May 2024 and can be downloaded for free. As a little bonus goodie, we also compiled a selection of favorites from our wallpapers archives at the end of the post. Maybe you’ll spot one of your almost-forgotten favorites from the past in here, too? A big thank-you to everyone who shared their designs with us this month! Happy May!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. It is also why the themes of the wallpapers weren’t influenced by us in any way but were designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.

A Symphony Of Dedication On Labour Day

“On Labour Day, we celebrate the hard-working individuals who contribute to the growth of our communities. Whether in busy urban areas or peaceful rural settings, this day recognizes the unsung heroes driving our advancement. Let us pay tribute to the workers, craftsmen, and visionaries shaping our shared tomorrow.” — Designed by PopArt Studio from Serbia.

Navigating The Amazon

“We are in May, the spring month par excellence, and we celebrate it in the Amazon jungle.” — Designed by Veronica Valenzuela Jimenez from Spain.

Popping Into Spring

“Spring has sprung, and what better metaphor than toast popping up and out of a fun-colored toaster!” — Designed by Stephanie Klemick from Emmaus, Pennsylvania, USA.

Duck

Designed by Madeline Scott from the United States.

Cruising Into Spring

“When I think of spring, I think of finally being able to drive with the windows down and enjoying the fresh air!” — Designed by Vanessa Mancuso from the United States.

Lava Is In The Air

Designed by Ricardo Gimenes from Sweden.

Love Myself

Designed by Design-Studio from India.

Bat Traffic

Designed by Ricardo Gimenes from Sweden.

Springtime Sips

“May is a month where the weather starts to warm and reminds us summer is approaching, so I created a bright cocktail-themed wallpaper since sipping cocktails in the sun is a popular warm weather activity!” — Designed by Hannah Coates from Baltimore, MD.

Hello May

“The longing for warmth, flowers in bloom, and new beginnings is finally over as we welcome the month of May. From celebrating nature on the days of turtles and birds to marking the days of our favorite wine and macarons, the historical celebrations of the International Workers’ Day, Cinco de Mayo, and Victory Day, to the unforgettable ‘May the Fourth be with you’. May is a time of celebration — so make every May day count!” — Designed by PopArt Studio from Serbia.

ARRR2-D2

Designed by Ricardo Gimenes from Sweden.

May Your May Be Magnificent

“May should be as bright and colorful as this calendar! That’s why our designers chose these juicy colors.” — Designed by MasterBundles from Ukraine.

The Monolith

Designed by Ricardo Gimenes from Sweden.

Blooming May

“In spring, especially in May, we all want bright colors and lightness, which was not there in winter.” — Designed by MasterBundles from Ukraine.

The Mushroom Band

“My daughter asked me to draw a band of mushrooms. Here it is!” — Designed by Vlad Gerasimov from Georgia.

Poppies Paradise

Designed by Nathalie Ouederni from France.

Lake Deck

“I wanted to make a big painterly vista with some mountains and a deck and such.” — Designed by Mike Healy from Australia.

Make A Wish

Designed by Julia Versinina from Chicago, USA.

Enjoy May!

“Springtime, especially Maytime, is my favorite time of the year. And I like popsicles — so it’s obvious isn’t it?” — Designed by Steffen Weiß from Germany.

Celestial Longitude Of 45°

“Lixia is the 7th solar term according to the traditional East Asian calendars, which divide a year into 24 solar terms. It signifies the beginning of summer in East Asian cultures and usually begins around May 5 and ends around May 21.” — Designed by Hong, Zi-Cing from Taiwan.

Stone Dahlias

Designed by Rachel Hines from the United States.

Understand Yourself

“Sunsets in May are the best way to understand who you are and where you are heading. Let’s think more!” — Designed by Igor Izhik from Canada.

Sweet Lily Of The Valley

“The ‘lily of the valley’ came earlier this year. In France, we celebrate the month of May with this plant.” — Designed by Philippe Brouard from France.

Today, Yesterday, Or Tomorrow

Designed by Alma Hoffmann from the United States.

Add Color To Your Life!

“This month is dedicated to flowers, to join us and brighten our days giving a little more color to our daily life.” — Designed by Verónica Valenzuela from Spain.

The Green Bear

Designed by Pedro Rolo from Portugal.

Lookout At Sea

“I wanted to create something fun and happy for the month of May. It’s a simple concept, but May is typically the time to adventure out into the world and enjoy the best of Spring.” — Designed by Alexander Jubinski from the United States.

Tentacles

Designed by Julie Lapointe from Canada.

Spring Gracefulness

“We don’t usually count the breaths we take, but observing nature in May, we can’t count our breaths being taken away.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Geo

Designed by Amanda Focht from the United States.

Blast Off!

“Calling all space cadets, it’s time to celebrate National Astronaut Day! Today we honor the fearless explorers who venture beyond our planet and boldly go where no one has gone before.” — Designed by PopArt Studio from Serbia.

Colorful

Designed by Lotum (https://www.lotum.de) from Germany.


Who Is Your Mother?

“Someone who wakes up early in the morning, cooks you healthy and tasty meals, does your dishes, washes your clothes, sends you off to school, sits by your side and cuddles you when you are down with fever and cold, and hugs you when you have lost all hope to cheer you up. Have you ever asked your mother to promise never to leave you? No. We never did that, because we are never insecure, and our relationship with our mothers is never uncertain. We have sketched out this beautiful design to cherish the awesomeness of motherhood. Wishing all a happy Mother’s Day!” — Designed by Acodez IT Solutions from India.

Asparagus Say Hi!

“In my part of the world, May marks the start of seasonal produce, starting with asparagus. I know spring is finally here and summer is around the corner when locally-grown asparagus shows up at the grocery store.” — Designed by Elaine Chen from Toronto, Canada.

May The Force Be With You

“Yoda is my favorite Star Wars character, and ‘may’ has a funny double meaning.” — Designed by Antun Hirsman from Croatia.

Birds Of May

“Inspired by a little-known ‘holiday’ on May 4th known as ‘Bird Day’. It is the first holiday in the United States celebrating birds. Hurray for birds!” — Designed by Clarity Creative Group from Orlando, FL.

By Cosima Mielke (hello@smashingmagazine.com)
Lessons Learned After Selling My Startup
https://smashingmagazine.com/2024/04/lessons-learned-after-selling-startup/
Mon, 29 Apr 2024 15:00:00 GMT

August 2021 marks a milestone for me. That’s when we signed an acquisition agreement to sell Chatra, a profitable live chat platform. I co-founded it after shutting down my first startup following a six-year struggle. Chatra took me and the team six years to build — six years of learning, experimenting, sometimes failing, and ultimately winning big.

Acquisitions happen all the time. But what does it look like to go through one, putting the thing you built and nurtured up for sale and ceding control to someone else to take over? Sometimes, these things are complicated and contain clauses about what you can and can’t say after the transaction is completed.

So, I’ve curated a handful of the most valuable takeaways from starting, growing, and selling the company. It took me some time to process everything; some lessons were learned immediately, while others took time to sink in. Ultimately, though, it’s a recollection of my personal journey. I hope sharing it can help you in the event you ever find yourself in a similar pair of shoes.

Keeping The Band Together

Rewind six years before the Chatra acquisition. My first startup, Getwear, ran out of steam, and I — along with everyone else — was ready to jump ship.

But we weren’t ready to part ways. My co-founder and partner was a close childhood friend with whom I sold pirated CDs in the late ’90s. Now, I don’t think that’s the most honest way to make a living, but it didn’t bother us much in high school. It also contributed to a strong bond between us, one that led to the launch of Getwear and, later, Chatra.

That partnership and collaboration were too precious to let go; we knew that our work wasn’t supposed to end at Getwear and that we’d have at least one more try together. The fact that we struggled together before is what allowed us to pull through difficult times later. Our friendship allowed us to work through stress, difficulties, and the unavoidable disagreements that always come up.

That was a big lesson for me: It’s good to have a partner you trust along for the ride. We were together before Chatra, and we saw it all the way through to the end. I can’t imagine how things would have been different had I partnered with someone new and unfamiliar, or worse, on my own.

Building Business Foundations

We believed Getwear would make us millionaires. So when it failed, that motivation effectively evaporated. We were no longer inspired to take on ambitious plans, but we still had enough steam to start a digital analog of a döner kebab shop — a simple, sought-after tech product just to pay our bills.

This business wasn’t to be built on the back of investment capital; no, it was bootstrapped. That means we made do with a small, independent, fully-remote team. Remember, this is in 2015. The global pandemic had yet to happen, and a fully remote team was still a novelty. And it was quite a change from how we ran Getwear, which was stocked with an R&D department, a production office, and even a factory in Mumbai. A small distributed team seemed the right approach to keep us nimble as we set about defining our path forward as a company.

Finding our purpose required us to look at the intersection of what the market needs and what we know and can do well. Building a customer support product was an obvious choice: at Getwear, we heavily relied on live chat to help users take their body measurements and place their orders.

We were familiar with existing products on the market. Besides, we already had experience building a conversational support product: we had built an internal tool to facilitate communication between our Mumbai-based factory and an overseas customer-facing team. The best thing about that was that it was built on a relatively obscure framework offering real-time messaging out of the box.

There were maybe 20 established competitors in the space back in 2015, but that didn’t dissuade us. If there was enough room for 20 products to do business, there must be enough for 21. I assumed we should treat competition as a market validation rather than an obstacle.

Looking back, I can confidently say that it’s totally possible to compete (and succeed) in a crowded market.

Product-wise, Getwear was very innovative; no one had ever built an online jeans customizer as powerful as ours. We designed the UX from scratch without relying much on the best practices.

With Chatra, we went down a completely different route: We improved the established live chat product category with features that were, at that time, commonly found in other types of software but hadn’t yet made their way to our field. That was the opportunity we seized.

The existing live chat platforms felt archaic in that the interfaces were clunky and reminiscent of Windows 95, the user flows were poorly thought out, and the dated user experience resulted in lost conversation histories.

Slack was a new product at this time and was all the rage with its fresh approach to user interfaces and conversational onboarding. Products like Facebook Messenger and Telegram (which is popular in Eastern Europe and the Middle East) were already standard bearers and formed user expectations for how a messaging experience should work on mobile. We learned a lot from these products and found in them the blueprint to design a modern chat widget and dashboard for agents.

We certainly stood on the shoulders of giants, and there’s nothing wrong with stealing like an artist: in fact, both Steve Jobs and Bill Gates did it.

The takeaway?

A product does not have to be new to redefine and disrupt a market. It’s possible to lead by introducing modern standards and designs rather than coming up with something radically different.

Making A Go-To-Market Strategy

Once we were clear about what we were building and how to build it, the time came to figure out a strategy for bringing our product to market.

Two things were very clear and true to us up front:

  1. We needed to launch and start earning immediately — in months rather than years — being a bootstrapped company and all.
  2. We didn’t have money for things like paid acquisition, brand awareness, or outbound sales representatives to serve as the front line for customer engagement.

Both conclusions, taken together, helped us decide to focus our efforts on small businesses that need fewer features in a product and onboard by self-service. Marketing-wise, that meant we’d need to find a way around prohibitively expensive ads.

Enter growth hacking! The term doesn’t resonate now the way it did in 2015: fresh, aggressive, and effective. As a user-facing website widget, we had a built-in acquisition channel by way of a “powered by Chatra” link. For it to be an effective marketing tool, we had to accumulate a certain number of customers. Otherwise, who’s going to see the link in the first place?

We combined unorthodox techniques to acquire new customers, like web-scraping and email address discovery with cold outreach.

Initially, we decided to go after our competitors’ customers. But the only thing we got out of targeting them with emails was their rightful anger.

In fact, a number of customers complained directly to the competitors, and the CEO of a prominent live chat company demanded we cease communicating with their users.

More than that, he actually requested that we donate to a well-known civil liberty NGO, something we wholeheartedly agreed to, considering it was indeed the right thing to do.

So, we decided to forget about competition and target potential customers (who owned e-commerce websites) using automation for lead research, email sending, and reply processing. We managed to do it on a massive scale with very few resources. By and large, cold outreach has been the single most effective marketing tool we have ever used. And contrary to common assumption, it is not a practice reserved purely for enterprise products.

Once we acquired a significant user mass, the widget link became our Number One acquisition channel. In lean startup terminology, a viral engine of growth is a situation when existing customers start generating leads and filling the marketing funnel for you. It’s where we all want to be, but the way to get there is often murky and unreliable. But my experience tells me that it is possible and can be planned.

For this strategy to work, it has to be based on natural user interactions. With widgets, the mechanic is quite apparent, but not so much with other products. Still, you can do well with serious planning and running experiments to help make informed decisions that achieve the best possible results.
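To illustrate what such planning might look like, here is a toy model — my own sketch, not Chatra’s actual methodology — of a viral loop driven by a coefficient k, the number of new customers each existing customer’s widget attracts per month. All numbers are hypothetical:

```python
def simulate_viral_growth(customers: int, k: float, months: int) -> list[int]:
    """Simulate a viral loop: each month, every existing customer's widget
    attracts k new customers on average, compounding on the base."""
    history = [customers]
    for _ in range(months):
        customers += int(customers * k)
        history.append(customers)
    return history

# 500 seed customers from cold outreach; each widget brings in
# 0.1 new customers per month (an invented, optimistic figure).
print(simulate_viral_growth(500, k=0.1, months=6))
```

Even a modest k compounds: the base grows roughly 10% a month without any additional marketing spend, which is exactly why a built-in acquisition channel is worth engineering deliberately.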

For example, we were surprised that the widget link performed way better in tests when we changed it from “Powered by Chatra” to “Get Chatra!”. We’re talking big increases with minor tweaks. The small details really do matter!
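For readers who want to run similar copy experiments, a two-proportion z-test is a standard way to check whether a lift like that is real rather than noise. A minimal sketch — the click and view counts below are invented, not Chatra’s data:

```python
from math import sqrt, erf

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Return the z-score and two-sided p-value for the difference
    between two click-through rates (pooled standard error)."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: "Powered by Chatra" vs. "Get Chatra!"
z, p = two_proportion_z(clicks_a=120, views_a=20_000,
                        clicks_b=180, views_b=20_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the new wording genuinely outperforms the old one rather than benefiting from chance.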

Content marketing was another avenue we explored for generating leads. We had already done the cold outreach and had a good viral engine going with the widget link. Content marketing, in contrast, was an attempt to generate leads at the “top” of the funnel, independent of any outbound marketing or our customers’ websites. We produced books and guides that were well-researched, written, and designed to bring in potential customers while supporting existing ones with resources to get the most out of Chatra.

Sadly, these efforts failed to attract many new leads. I don’t want to say not to invest in quality content; it’s just that this is not a viable short-term growth strategy.

Increasing Lifetime Customer Value

It took six months of development to launch and another year to finally break even. By then, we had achieved a product-market fit with consistent organic growth; it was time to focus on metrics and unit economics. Our challenge was to limit customer churn and find ways to increase the lifetime value of existing customers.

If there’s an arch-enemy to SaaS, it’s churn. Mitigating churn is crucial to any subscription business, as longer subscriptions generate more revenue. Plus, it’s easier to prevent churn than it is to acquire new customers.

We found it helpful to distinguish between avoidable churn and unavoidable (i.e., “natural”) churn. The latter concerns customer behavior beyond our control: if an e-commerce store shuts down, they won’t pay for services. And we had nothing to do with them shutting down — it’s just the reality of life that most small businesses fail. No quick-fix strategy could ever change that; we just had to deal with it.

Chatra’s subscription pricing was fairly inexpensive, yet we enjoyed a relatively high customer lifetime value (cLTV). Many customers tended to stay for a long time — some, for years. Our high cLTV helped us justify higher customer acquisition costs (CAC) for paid ads in the Shopify app store once we decided to run them. Running the ads allowed us to improve our Shopify app store search position. And because of that, we improved and kept our position as a top app within our category. That, I believe, was one of the factors that the company Brevo considered when they later decided to acquire our business.
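The relationship this paragraph leans on can be made explicit: with a roughly constant monthly churn rate, expected customer lifetime is about 1/churn, so cLTV ≈ ARPU / churn, and the cLTV:CAC ratio indicates how much acquisition spend a customer justifies. A sketch with invented figures (not Chatra’s actual numbers):

```python
def customer_ltv(arpu_monthly: float, monthly_churn: float) -> float:
    """Expected lifetime value: average monthly revenue per user
    divided by the monthly churn rate (expected lifetime = 1/churn)."""
    return arpu_monthly / monthly_churn

def ltv_cac_ratio(ltv: float, cac: float) -> float:
    """A ratio of 3 or more is a common rule of thumb for healthy SaaS."""
    return ltv / cac

ltv = customer_ltv(arpu_monthly=19.0, monthly_churn=0.03)
print(f"cLTV ≈ ${ltv:.0f}, LTV:CAC = {ltv_cac_ratio(ltv, 150.0):.1f}")
```

The intuition: halving churn doubles lifetime value, which is why retention work so often beats spending more on acquisition.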

We tried improving the free-to-paid subscription conversion rate by targeting those who actively used the product but remained on a free plan for an extended period. We offered them an upgraded plan subscription for just one dollar per year. And to our surprise, that failed to convince many people to upgrade. We were forced to conclude that there are two types of customers: those who pay and those who do not (and will not).

From that point forward, things got even weirder. For example, we ran several experiments with subscription pricing and found that we could increase subscription prices from $11 per seat to $19 without adversely affecting either the visitor-to-user or the free-to-paid conversion rates! Apparently, price doesn’t matter as much as you might think. It’s possible to raise prices without adversely affecting conversions, at least in our experience with a freemium pricing model.
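The arithmetic behind that experiment is worth spelling out: if both funnel conversion rates truly hold steady, revenue scales linearly with price. A hypothetical sketch (made-up funnel numbers, not Chatra’s):

```python
def monthly_revenue(visitors: int, visitor_to_user: float,
                    free_to_paid: float, price_per_seat: float) -> float:
    """Expected new MRR from a cohort of visitors,
    given two funnel conversion steps and a seat price."""
    return visitors * visitor_to_user * free_to_paid * price_per_seat

# Same funnel (invented rates), two price points:
before = monthly_revenue(10_000, 0.05, 0.04, 11.0)
after = monthly_revenue(10_000, 0.05, 0.04, 19.0)
print(f"${before:.0f} -> ${after:.0f} MRR, +{after / before - 1:.0%}")
```

With unchanged conversion rates, the $11 to $19 move lifts revenue by roughly 73% for the same traffic, which is why pricing experiments can be the highest-leverage tests a SaaS business runs.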

We also released additional products we could cross-sell to existing customers. One was Livebar, an app for in-browser notifications on recent online shopping purchases. Another was Yeps, a simple announcement bar that sticks to the top of a webpage. Product-wise, both were good. But despite our efforts to bring awareness to them in all our communications with Chatra customers, they never really took off. We closed the first and sold the second for a price that barely justified the development and ongoing support we had put into it. We were wrong to assume that having a loyal audience meant we could automatically sell them another product.

Contemplating An Exit

Chatra was a lean company. As a SaaS business, we had a perfect cost-revenue ratio and gained new customers mainly through viral dynamics and self-onboarding. These didn’t increase our costs much but did indeed bring in extra subscription dollars. The engine worked almost without any effort on our side.

After a few years, the company could mostly function on auto-pilot, giving us — the founders — time and resources to pay our bills and run business experiments. We were enjoying a good life. Our work was a success!

We gave up on an exit strategy even before starting, so we didn’t pay much attention to the acquisition offers we routinely received; most weren’t enticing enough to pull us away. Even those sent by people known in the industry were way too small: the best offer we got was a valuation of 2.5 times our Annual Recurring Revenue (ARR), which was a non-starter for us.

Then, we received an email with another offer. The details were slim, but we decided to at least entertain the idea and schedule a time to chat. I replied that we wouldn’t consider anything lower than an industry-standard venture-backed SaaS valuation (which was about eight times ARR at the time). The response, surprisingly, read: “Let’s talk. Are you ready to sign a non-disclosure agreement?”

My biggest concern was that transferring ownership might lead to the Chatra team being laid off and the product termination. I didn’t want to let down our existing customers! The buyer understood the situation and assured us that Chatra would remain a separate line of business, at least for an extended period. No one on the team would lose their job. The buyer also planned to fork Chatra rather than close it, at least initially.

Still, letting go of it was difficult, and at times, I even felt the urge to blow up the negotiations.

So, why sell at all? We did it for three reasons:

  • First, we felt stuck in the mature stage of the business lifecycle and missed the feeling of creating new things.
  • Second, we (rightfully) knew that the good times could not last forever; we would be wise to avoid putting all our eggs in one basket.
  • Third was a bit of pride. I genuinely wanted to go through the acquisition process, which has always seemed like a rite of passage for entrepreneurs.

Chatra was growing, cash-flow positive, and economic tailwinds seemed to blow our way. On the flip side, however, we had little left to do as founders. We didn’t want to go upmarket and compete with massive players like Intercom and Drift. We were happy in our niche, but it didn’t offer enough growth or expansion opportunities. We felt near the end of the line.

Looking back, I see how fortunate we were. The market took a huge hit soon after the acquisition, to the extent that I’m sure we would not have been able to fetch equally enticing offers within the next two years.

I want to stress that the offer we got was very, very generous. Still, I often kick myself for not asking for more, as a deep-pocketed buyer is unlikely to turn away simply because we tried to increase the company’s valuation. The additional ask would have been negligible to the buyer, but it could have been very meaningful for us.

Different acquisitions wind up looking different in the end. If you’re curious what a transaction looks like, ours was split into three payouts:

  1. An initial, fixed payment on the closing date;
  2. Several flexible payouts based on reaching post-acquisition milestones;
  3. An escrow amount deposited with an escrow agent for the possibility of something going wrong, like legal claims.

We assumed this structure was non-negotiable and didn’t try to agree on a different distribution that would move more money to the initial payment. Why? We were too shy to ask and were sure we’d complete all requirements on time. Accepting a significant payment delay essentially credited the buyer for the amount of the payouts while leaving me and my co-founder vulnerable to uncertainty.

We should’ve been bold and negotiated more favorable terms. After all, it represented the last time we’d have to battle for Chatra. I consider that a lesson learned for next time.

Conclusion

Parting ways with Chatra wasn’t easy. The team became my second family, and every product pixel and bit of code was dear to my heart. And yes, I do still feel nostalgia for it from time to time. But I certainly enjoy the freedom that comes with the financial gains.

One thing I absolutely want to mention before closing this out is that

Having an “exit” under my belt actually did very little to change my personal well-being or sense of self-worth. The biggest lesson I took away from the acquisition is that success is the process of doing things, not the point you can arrive at.

I don’t yet know where the journey will take me from here, but I’m confident that there will be both a business challenge and a way of helping others on their own founder journey. That said, I sincerely hope that my experience gives you a good deal of insight into the process of selling a company. It’s one of those things that often happens behind closed doors. But by shedding a little light on it — at least this one reflection — perhaps you will be more prepared than I was and know what to look for.

By Yaakov Karda (hello@smashingmagazine.com)
The End Of The Free Tier
https://smashingmagazine.com/2024/04/end-of-free-tier/
Fri, 26 Apr 2024 08:00:00 GMT

I love free tiers, and I am not the only one. Everyone loves free things — they’re the best thing in life, after all. But maybe we have grown too accustomed to them, to the extent that a service switching from a “freemium” model to a fully paid plan would probably feel outrageous to you. Nowadays, though, the transition from free to paid services seems inevitable. It’s a matter of when a service drops its free tier rather than if it will.

Companies need to make money. As developers, we probably understand the most that a product comes with costs; there are startup funds, resources, and salaries spent to maintain and support the product against a competitive globalized market.

If I decided to take something I made and ship it to others, you darn well know I would charge money for it, and I assume you’re the same. At the same time, I’m typically more than happy to pay for something, knowing it supports the people who made it.

We get that, and we surely don’t walk into a grocery store complaining that nothing they have is free. It’s just how things work.

What exactly, then, is so infuriating about a service offering a free tier and later deciding to transition to a priced one?

It’s Positioning, Not Money

It’s not so much about the money as it is the positioning. Who wouldn’t feel somewhat scammed, having invested time and resources into something that was initially advertised as “free” only to be blindsided behind a paywall?

Most of the time, the feeling is less anger than mild annoyance. For example, if your favorite browser suddenly became a paid premium offering, you would most likely switch to the next best option. But what happens when the free tier for a hosted product or service is retired? Switching isn’t as easy when hundreds of thousands of developers host their projects on a free-tier hosting plan.

The practice of offering a free tier only to remove it later has become common on the web and won’t go away any time soon. It’s as though companies ditch free tiers once (1) the product becomes mature enough to be a feature-rich offering or (2) the company realizes free customers are not converting into paid customers.

It has been a source of endless complaints, and one only needs to look back at PlanetScale’s recent decision to remove its free-tier database plan, which we will get deeper into in a bit. Are free tiers removed because of their unsustainable nature, or is it to appease profit-hungry companies? I want to explore the why and how of free tiers, better approaches for marketing “free” services, and how to smoothly retire a free tier when it inevitably goes away.

Glossary

Before we wade further into these waters, I think it’s worth having a baseline understanding of pricing concepts that are relevant to the discussion.

A free tier is one of several flavors:

  • Free trial opt-in
    Permits users to try out the product for a limited period without providing payment details. Once the trial ends, so does access to the product features.
  • Free trial opt-out
    Requires users to provide payment information during registration en route to a free trial that, once it ends, automatically converts to a paid account.
  • Freemium model
    Offers access to a product’s “core” features but requires upgrading to a paid account to unlock other features and benefits.
  • Reverse trial model
    Users start with access to the premium tier upon registration and then transition to a freemium tier after the trial period ends.
Case Study: PlanetScale

Let’s start this conversation by looking at PlanetScale and how it killed its free tier at the beginning of the year. Founded in 2018, PlanetScale launched its database as a service in 2021 and has raised $105 million in venture capital and seed funding, becoming one of the fastest-growing tech companies in North America by 2023. In March of this year, CEO Sam Lambert announced the removal of PlanetScale’s hobby tier.

In short, the decision was made to provide “a reliable and sustainable platform for our customers” by not “giving away endless amounts of free resources to keep growing,” which, of course, leaves everyone in the freemium tier until April 8 to either pay for one of the next plans at the outrageous starting price of $39 per month or migrate to another platform.

Again, a company needs steady revenue and a reliable business plan to stay afloat. But PlanetScale sent mixed signals when they stated in the same memo that “[e]very unprofitable company has a date in the future where it could disappear.” Then they went on to say they are “the main database for companies totaling more than $50B in market cap,” and they “have been recognized [...] as one of the fastest growing tech companies in the US.”

In non-bureaucratic speak, PlanetScale says that the product is failing from one side of its mouth and that the company is wildly successful from the other.

The company is doing great. In November 2023, PlanetScale was ranked as the 188th fastest-growing company in North America by Deloitte Technology Fast 500™. Growth doesn’t necessarily equal revenue, but “to be eligible for Technology Fast 500 recognition, [...] [c]ompanies must have base-year operating revenues of at least US $50,000, and current-year operating revenues of at least US $5 million.”

PlanetScale’s decision can only be interpreted as “we want more money,” at least to me. There’s nothing about its current performance that suggests it needs the revenue to keep the company alive.

That’s a punch below the belt for the developer community, especially considering that those on the free tier are likely independent bootstrappers who need to keep their costs low. And let’s not overlook that ending the free tier was accompanied by a round of layoffs at the company.

PlanetScale’s story is not what worries me; it’s that retiring freemium plans is becoming standard practice, as we have seen with the likes of other big PaaS players, including Heroku and Railway.

That said, the PlanetScale case is perhaps the most frustrating because the cheapest alternative to the free tier they now offer is a whopping $39 per month. Compare that to the likes of others in that space, such as Heroku ($7 per month) and Railway ($5 per month).

Is This How A Free Tier Works?

A new service with zero adoption has no visibility; its value can’t be seen from behind a paywall. Launching a product or service with a freemium pricing model is a common way to bring awareness to the product and entice early adopters who might convert into paying customers and help offset the costs of those on the free plan. It’s the old Pareto, or 80/20, rule, where the 20% of paying customers ought to subsidize the 80% of free users.

A conversion rate is the percentage of users that upgrade from a free tier to a paid one, and an “average” rate depends on the type of free tier or trial being offered.

In a freemium model — without sales assist — a good conversion rate is somewhere between 3% and 5%, but even that is optimistic. Conversion rates are often far lower in reality and are perhaps the toughest metric to improve for startups with few or no customers. Early on, startups often have so few paying customers that they have to operate at a loss until they figure out how to land paying customers who can subsidize the ones who aren’t paying anything.
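To make the subsidy math concrete, here is a back-of-the-envelope sketch of freemium economics. Every number in it (user count, conversion rate, plan price, per-user infrastructure cost) is a hypothetical illustration, not any real provider’s figures:

```python
# Back-of-the-envelope freemium economics. Every number here is a
# hypothetical illustration, not any real provider's actual figures.

def monthly_margin(total_users, conversion_rate, price, cost_per_user):
    """Revenue from the paying minority minus infrastructure costs for everyone."""
    paying_users = total_users * conversion_rate
    revenue = paying_users * price
    costs = total_users * cost_per_user
    return revenue - costs

# 10,000 users, a 3% conversion rate, a $10/month plan, and
# $0.50/month of infrastructure cost per user:
print(monthly_margin(10_000, 0.03, 10.00, 0.50))  # -2000.0: a monthly loss

# The break-even conversion rate is simply cost_per_user / price:
print(0.50 / 10.00)  # 0.05, i.e. 5% -- at the top of the "good" range
```

In this sketch, a startup converting at a respectable 3% still loses $2,000 a month, which is exactly the dynamic that pushes companies to either raise prices or cut the free tier.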

The longer a company operates at a loss, the more likely it is to chase the highest possible growth before inevitably cutting benefits for free users.

A lot of those free users will feel misled and migrate to another service, but once the audience is big enough, a company can afford to lose free customers in favor of the minority that will switch to premium. Take Evernote, for example. The note-taking app allowed free users to save 100,000 notes and 250 notebooks only to do an about-face in 2023 and limit free users to 50 notes and one notebook.

In principle, a free tier serves the same purpose for SaaS (Software as a Service) and PaaS (Platform as a Service) offerings, but the effects differ. For one, cloud computing costs a lot of money, so offering an AWS wrapper in a free tier is significantly harder to sustain. The real difference between SaaS and PaaS, however, becomes clear when the company decides to kill off its free tier.

Let’s take Zoom as a SaaS example: there is a basic tier that gives you up to 40 minutes of free meeting time, and that is plenty for people who simply don’t need much beyond that. If Zoom were to remove its free tier, free users would most likely move to other freemium alternatives like Google Meet rather than upgrade to one of Zoom’s paid tiers. Those customers have invested nothing in Zoom that locks them in, so the cost of switching to another meeting app is only the learning curve of whichever app they switch to.

This is in contrast to a PaaS; if the free tier is removed, switching providers introduces costs since a part of your architecture lives in the provider’s free tier. Besides the effort needed to migrate to another provider, moving data and servers can be an expensive operation, thanks to data egress fees. Data egress fees are obscure charges that cloud providers make customers pay for moving data from one service to another. They charge you to stop paying!
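To illustrate why egress fees sting when a free tier disappears, here is a trivial estimate. The per-gigabyte rate below is a hypothetical placeholder; actual egress pricing varies by provider and region, so check the provider’s pricing page before estimating a real migration:

```python
# Rough cost of moving data off a cloud provider. The per-GB egress
# rate is a hypothetical placeholder, not any provider's real price.

def egress_cost(data_gb, fee_per_gb):
    """Dollars charged just for transferring your data out of the platform."""
    return data_gb * fee_per_gb

# Migrating a 2 TB database at a hypothetical $0.09/GB egress rate:
print(egress_cost(2_000, 0.09))  # 180.0 dollars just to leave
```

Even at modest rates, a forced migration carries a real price tag, and that cost lands on exactly the users a retired free tier pushes out the door.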

Thankfully, there is an increased awareness of this issue through the European Union’s Data Act that requires cloud providers located in Europe to remove barriers that prevent customers from easily switching between companies, including the removal of artificial egress fees.

The Ethics Of The Free Tier

Is it the developer’s fault for hosting a project on a free pricing tier, considering that it can be pulled at any moment? I have two schools of thought on this: one based on principle, the other on consequences.

  • Principle
    On the one hand, you shouldn’t have to expect a company to pull the rug out from under you by removing a free tier, especially if the company aims to be a reliable and sustainable platform.
  • Consequential
    On the other hand, you don’t expect someone to run a red light and hit you when you are driving, but you still look both ways at an intersection. So it is with using a free tier. Even if it is “immoral” for a company to remove the tier, a developer ought to have a backup plan in the event that it happens, especially as the disappearance of free tiers becomes more prevalent in the industry.

I think it boils down to a matter of transparency. No free tier is advertised as something that may disappear, even if it will in the future. In this case, a free tier is supposed to be another tier with fewer benefits than the paid plan offerings but just as reliable as the most expensive plan, so no user should expect to have to migrate their projects to another provider any time soon.

What’s The Alternative?

Offering customers a free tier only to remove it once the company gets a “healthy enough” share of the market is just wrong, particularly if it was never attached to an up-front sunset date.

Pretending that the purpose of a free tier is the same as a free trial is unjust since it surely isn’t advertised that way.

If a company wants to give people a taste of how a product or service works, then I think there are far better and more sincere alternatives to the free-tier pricing model:

  • Free trials (opt-in)
    Strapi is an open-source CMS and a perfect example of a service offering a free trial. In 2023, the company released a cloud provider to host Strapi CMS with zero configuration. Even though I think Strapi Cloud is on the pricey side, I still appreciate having a 14-day free trial over a free tier that may well be removed later. The free trial gives users enough time to get a feel for the product, and there’s no credit card required that would lock someone in (because, let’s face it, some companies count on you forgetting to cancel your free subscription before payments kick in).

  • Free credits
    I have used Railway to host Node.js + Postgres in the past. I think that its “free tier” is the best example of how to help customers try the service: the cheapest plan is a relatively affordable $5 per month, and a new subscriber is credited with $5 to start the project and evaluate the service, again, without the requirement of handing over credit card information or pulling any rugs out from under people. Want to continue your service after the free credits are exhausted? Buy more credits!

Railway is a particular case because it used to have a free tier, but it was withdrawn on June 2, 2023. However, the company removed it with a level of care and concern for customers that PlanetScale lacked and even gave customers who relied on the free tier a trial account with a number of free credits. It is also important to note (and I can’t get over it) that PlanetScale’s new cheapest plan is $39 per month, while Railway was able to limit the damage to $5 per month.

Free Tiers That I Use

I don’t want this article to be just a listicle of free services but rather the start of a conversation about the “free-tier dilemma”. I also want to share some of the free tiers I use, even for small but production-ready projects.

Supabase

You can make pretty much any imaginable web app using Supabase as the back-end since it brings a PostgreSQL database, authentication, real-time subscriptions, and storage in a central dashboard — complete with a generous allocation of database usage in its free tier.

Railway

I have been using Railway to host Strapi CMS for a long time. Aside from its beautiful UI, Railway includes seamless deployment workflows, automatic scaling, built-in CI/CD pipelines, and integration with popular frameworks and databases thanks to its hundreds of templates. It doesn’t include a free tier per se, but you can get the full feel of Railway with the $5 credit they offer.

GitHub Pages

I use GitHub Pages the way I know many of you do as well: for static pages and technical demos. I have used it before to make live examples for my blog posts. So, it’s more of a playground that I use to make a few artifacts when I need to deploy something fast, but I don’t rely on it for anything that would be of consequence if it were to suddenly go away.

Netlify

Beyond hosting, Netlify offers support for almost all modern frameworks, not to mention that they toss in lots of additional perks, including solid documentation, continuous deployment, templates, an edge network, and analytics — all of which are available in a free tier that meets almost anyone’s needs.

Conclusion

In case it isn’t totally clear where I fall on the free pricing tier situation: I’m not advocating that we end the practice, but rather for more transparency from the companies that offer free tier plans and increased awareness among developers like myself.

I believe that the only way it makes sense to offer a free tier for a SaaS/PaaS is for the company providing it to view it as part of the core product, one that cannot be sunset without a clear and transparent exit strategy, clearly communicated up-front during any sort of registration process. Have a plan for users to painlessly switch services. Allow the customer to make an informed choice and accept responsibility from there.

Free tiers should attract users rather than trap them, and there is a world of difference between replacing a free tier with a plan that costs $5 per month and one that costs nearly $40. Taking away the service is one thing; charging exorbitant rates on top of it only adds insult to injury.

We can do better here, and there are plenty of alternatives to free tiers for effectively marketing a product.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Juan Diego Rodríguez)
<![CDATA[Conducting Accessibility Research In An Inaccessible Ecosystem]]> https://smashingmagazine.com/2024/04/conducting-accessibility-research-inaccessible-ecosystem/ https://smashingmagazine.com/2024/04/conducting-accessibility-research-inaccessible-ecosystem/ Thu, 25 Apr 2024 12:00:00 GMT Ensuring technology is accessible and inclusive relies heavily on receiving feedback directly from disabled users. You cannot rely solely on checklists, guidelines, and good-faith guesses to get things right. This is often hindered, however, by a lack of accessible prototypes available to use during testing.

Rather than wait for the digital landscape to change, researchers should leverage all the available tools they can use to create and replicate the testing environments they need to get this important research completed. Without it, we will continue to have a primarily inaccessible and not inclusive technology landscape that will never be disrupted.

Note: I use “identity first” disability language (as in “disabled people”) rather than “people first” language (as in “people with disabilities”). Identity first language aligns with disability advocates who see disability as a human trait description or even community and not a subject to be avoided or shamed. For more, review “Writing Respectfully: Person-First and Identity-First Language”.

Accessibility-focused Research In All Phases

When people advocate that UX Research should include disabled participants, it’s often with the mindset that this will happen on the final product once development is complete. One primary reason is that this is when researchers have access to the most accessible artifact with which to run the study. However,

The real ability to ensure an accessible and inclusive system is not by evaluating a final product at the end of a project; it’s by assessing user needs at the start and then evaluating the iterative prototypes along the way.

Prototype Research Should Include Disabled Participants

In general, the iterative prototype phase of a project is when teams explore various design options and make decisions that will influence the final project outcome. Gathering feedback from representative users during this phase can help teams make informed decisions, including key pivots before significant development and testing resources are used.

During the prototype phase of user testing, the representative users should include disabled participants. By collecting feedback and perspectives of people with a variety of disabilities in early design testing phases, teams can more thoughtfully incorporate key considerations and supplement accessibility guidelines with real-world feedback. This early-and-often approach is the best way to include accessibility and inclusivity into a process and ensure a more accessible final product.

If you instead wait to include disabled participants in research until a product is near final, this inevitably leads to patchwork fixes for any critical feedback. Feedback not deemed critical will likely get “backlogged,” where item priorities compete with new feature updates. With this approach, you’ll constantly be playing catch-up rather than getting it right up front in an elegant, integrated way.

Accessibility Research Can’t Wait Until The End

Not only does research with disabled participants often occur too late in a project, but it is also far too often viewed as separate from other research studies (sometimes referred to as the “main research”). It cannot be overstated that this reinforces the notion of separate-and-not-equal as compared to non-disabled participants and other stakeholder feedback. This has a severe negative impact on how a team will view the priority of inclusive design and, more broadly, the value of disabled people. That is, this reinforces “ableism”, a devaluing of disabled people in society.

UX Research with diverse participants that include a wide variety of disabilities can go a long way in dismantling ableist views and creating vitally needed inclusive technology.

The problem is that even when a team is on board with the idea, it’s not always easy to do inclusive research, particularly when involving prototypes. While discovery research can be conducted with minimal tooling and summative research can leverage fully built and accessible systems, prototype research quickly reveals severe accessibility barriers that feel like they can’t be overcome.

Inaccessible Technology Impedes Accessibility Research

Most technology we use has accessibility barriers for users with disabilities. As an example, the WebAIM Million report consistently finds that 96% of web homepages have accessibility errors that are fixable and preventable.

Web and mobile applications are just as inaccessible as websites, including the very tools that produce early-stage prototypes. Thus, the artifacts researchers might want to use for prototype testing to help create accessible products are themselves inaccessible, creating a barrier for disabled research participants. It quickly becomes a vicious cycle that seems hard to break.

The Limitations Of Figma

Currently, the most popular industry tool for initial prototyping is Figma. These files become the artifacts researchers use to conduct a research study. However, these files often fall short of being accessible enough for many participants with disabilities.

To be clear, I absolutely applaud the Figma employees who have worked very hard on including screen reader support and keyboard functionality in Figma prototypes. This represents significant progress towards removing accessibility barriers in our core products and should not be overlooked. Nevertheless, there are still limitations and even blockers to research.

For one, the Figma files must be created in a way that will mimic the website layout and code. For example, for screen reader navigation to be successful, the elements need to be in their correct reading order in the Layers panel (not solely look correct visually), include labeled elements such as buttons (not solely items styled to look like buttons), and include alternative text for images. Often, however, designers do not build iterative prototypes with these considerations in mind, which prevents the keyboard from navigating correctly and the screen reader from providing the necessary details to comprehend the page.

In addition, Figma’s prototypes do not have selectable, configurable text. This prevents key visual adjustments such as browser zoom to increase text size; dark mode, which is easier for some to view; and selecting text to have it read aloud. If a participant needs these kinds of adjustments (or others I list in the table below), a Figma prototype will not be accessible to them.

Table: Figma prototype limitations per assistive technology

Assistive Technology | Disability Category | Limitation

  • Keyboard-only navigation (Mobility): Must use proper element types (such as button or input) in the expected page order to ensure operability
  • Screen reader (Vision): Must include structure to ensure readability:
    • Elements in logical order to ensure correct reading order
    • Alternative text added to images
    • Descriptive names added for buttons
  • Dark mode/High contrast mode (Low vision, Neurodiversity): Not available
  • Browser zoom (Low vision, Neurodiversity, Mobility): Not available
  • Screen reader used with mouse hover; read-aloud software with text selection (Vision, Neurodiversity): Cannot be used
  • Voice control; switch control device (Mobility): Cannot be used

Inclusive Research Is Needed Regardless

Having accessibility challenges with a prototype doesn’t mean we give up on the research. Instead, it means we need to get creative in our approach. This research is too important to keep waiting for the ideal set-up, particularly when our findings are often precisely what’s needed to create accessible technology.

Part of crafting a research study is determining what artifact to use during the study. Thus, when considering prototype research, it is a matter of creating the artifact best suited for your study. If this isn’t going to be, say, a Figma file you receive from designers, then consider what else can be used to get the job done.

Working Around the Current State

Being able to include diverse perspectives from disabled research participants throughout a project’s creation is possible and necessary. Keeping in mind your research questions and the capabilities of your participants, there are research methods and strategies that can be made accessible to gather authentic feedback during the critical prototype design phase.

With that in mind, I propose five ways you can accomplish prototype research while working around inaccessible prototypes:

  1. Use a survey.
  2. Conduct a co-design session.
  3. Test with a similar system.
  4. Build your own rapid prototype.
  5. Use the Wizard of Oz method.

Use a Survey Instead

Not all research questions at this phase need a full working prototype to be answered, particularly if they are about the general product features or product wording and not the visual design. Oftentimes, a survey tool or similar type of evaluation can be just as effective.

For example, you can confirm a site’s navigation options are intuitive by describing a scenario with a list of navigation choices while also testing if key content is understandable by confirming the user’s next steps based on a passage of text.

Image description

Acme Company Website Survey

Complete this questionnaire to help us determine if our site will be understandable.

  1. Scenario: You want to find out this organization's mission statement. Which menu option do you choose?
    [List of radio buttons]
    • Home
    • About
    • Resources
    • Find an Office
    • Search
  2. The following describes directions for applying to our grant. After reading, answer the following question:

    The Council’s Grant serves to advance Acme's goals by sponsoring community events. In determining whether to fund an event, the Council also considers factors including, but not limited to:
    • Target audiences
    • Alignment with the Council’s goals and objectives
    • Evaluations measuring participant satisfaction
    To apply, download the form below.

    Based on this wording, what would you include in your grant application?
    [Input Field]

Just be sure you build a WCAG-compliant survey that includes accessible form layouts and question types. This will ensure participants can navigate using their assistive technologies. For example, Qualtrics has a specific form layout that is built to be accessible, or check out these accessibility tips for Google Forms. If sharing a document, note features that will enhance accessibility, such as using the ribbon for styling in Microsoft Word.

Tip: To find accessibility documentation for the software you’re using, search in your favorite search engine for the product name plus the word “accessibility” to find a product’s accessibility documentation.

Conduct Co-design Sessions

The prototyping phase might be a good time to utilize co-design and participatory design methods. With these methods, you can co-create designs with participants using any variety of artifacts that match the capabilities of your participants along with your research goals. The feedback can range from high-level workflows to specific visual designs, and you can guide the conversation with mock-ups, equivalent systems, or more creative artifacts such as storyboards that illustrate a scenario for user reaction.

For the prototype artifacts, these can range from low- to high-fidelity. For instance, participants without mobility or vision impairments can use paper-and-pencil sketching or whiteboarding. People with somewhat limited mobility may prefer a tablet-based drawing tool, such as using an Apple pencil with an iPad. Participants with visual impairments may prefer more 3-dimensional tools such as craft supplies, modeling clay, and/or cardboard. Or you may find that simply working on a collaborative online document offers the best accessibility as users can engage with their personalized assistive technology to jot down ideas.

Notably, the types of artifacts you use will be beneficial across differing user groups. In fact, rather than limiting the artifacts, try to offer a variety of ways to provide feedback by default. By doing this, participants can feel more empowered and engaged by the activity while also reassuring them you have created an inclusive environment. If you’re not sure what options to include, feel free to confirm what methods will work best as you recruit participants. That is, as you describe the primary activity when they are signing up, you can ask if the materials you have will be operable for the participant or allow them to tell you what they prefer to use.

The discussion you have and any supplemental artifacts you use then depend on communication styles. For example, deaf participants may need sign language interpreters to communicate their views but will be able to see sample systems, while blind participants will need descriptions of key visual information to give feedback. The actual study facilitation comes down to who you are recruiting and what level of feedback you are seeking; from there, you can work through the accommodations that will work best.

I conducted two co-design sessions at two different project phases while exploring how to create a wearable blind pedestrian navigation device. Early in the project, when we were generally talking about the feature set, we brought in several low-fidelity supplies, including a Braille label maker, cardboard, clay, Velcro, clipboards, tape, paper, and pipe cleaners. Based on user feedback, I fashioned a clipboard hanging from pipe cleaners as one prototype.

Later in the project when we were discussing the size and weight, we taped together Arduino hardware pieces representing the features identified by the participants. Both outcomes are pictured below and featured in a paper entitled, “What Not to Wearable: Using Participatory Workshops to Explore Wearable Device Form Factors for Blind Users.”

Ultimately, the benefit of this type of study is the participant-led feedback. In this way, participants are giving unfiltered feedback that is less influenced by designers, which may lead to more thoughtful design in the end.

Test With an Equivalent System

Very few projects are completely new creations, and often, teams use an existing site or application for project inspiration. Consider using similar existing systems and equivalent scenarios for your testing instead of creating a prototype.

By using an existing live system, participants can then use their assistive technology and adaptive techniques, which can make the study more accessible and authentic. Also, the study findings can range from the desirability of the available product features to the accessibility and usability of individual page elements. These lessons can then inform what design and code decisions to make in your system.

One caveat is to be aware of any accessibility barriers in that existing system. Particularly for website and web applications, you can look for accessibility documentation to determine if the company has reported any WCAG-conformance accessibility efforts, use tools like WAVE to test the system yourself, and/or mimic how your participants will use the system with their assistive technology. If there are workarounds for what you find, you may be able to avoid certain parts of the application or help users navigate past the inaccessible parts. However, if the site is going to be completely unusable for your participants, this won’t be a viable option for you.
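Automated checkers like WAVE go much deeper, but even a small script can surface obvious barriers before a session, such as images missing alternative text. Below is a minimal sketch using only Python’s standard-library HTML parser, run against an inline HTML snippet (fetching a live page and a fuller audit are left out for brevity):

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects <img> tags whose alt attribute is absent or empty."""

    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) tuples
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.flagged.append(attr_map.get("src", "<no src>"))

html = """
<img src="logo.png" alt="Acme Company logo">
<img src="hero.jpg">
<img src="divider.png" alt="">
"""

checker = MissingAltChecker()
checker.feed(html)
print(checker.flagged)  # ['hero.jpg', 'divider.png']
```

Keep in mind that an empty alt="" is perfectly valid for decorative images, so a script like this only flags candidates for human review; it is no substitute for testing with the assistive technology your participants actually use.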

If the system is usable enough for your testing, however, you can take the testing a step further by making updates on the fly if you or someone you collaborate with has engineering experience. For example, you can manipulate a website’s code with developer tools to add, remove, or change the elements and styling on a page in real time. (See “About browser developer tools”.) This can further enhance the feedback you give to your teams as it may more closely match your team’s intended design.

Build a Rapid Website Prototype

Notably, when conducting research focused on physical devices and hardware, you will not face the same obstacles to inaccessibility as with websites and web applications. You can use a variety of materials to create your prototypes, from cardboard to fabric to 3D printed material. I’ve sewn haptic vibration modules to a makeshift leather bracelet when working with wearables, for instance.

However, for web testing, it may be necessary to build a rapid prototype, especially to work around inaccessible artifacts such as a Figma file. This involves using a site builder that allows you to quickly create a replica of your team’s website. To create an accessible website, you’ll need a site builder with accessibility features and capabilities; I recommend WordPress, Squarespace, Webflow, and Google Sites.

I recently used Google Sites to create a replica of a client’s draft pages in a matter of hours. I was adamant we should include disabled participants in feedback loops early and often, and this included after a round of significant visual design and content decisions. The web agency building the client’s site used Figma but not with the required formatting to use the built-in screen reader functionality. Rather than leave out blind user feedback at such a crucial time in the project, I started with a similar Google Sites template, took a best guess at how to structure the elements such as headings, recreated the anticipated column and card layouts as best I could, and used placeholder images with projected alt text instead of their custom graphics.

The screen reader testing turned into an impromptu co-design session because I could make changes in-the-moment to the live site for the participant to immediately test out. For example, we determined that some places where I used headings were not necessary, and we talked about image alt text in detail. I was able to add specific design and code feedback to my report, as well as share the live site (and corresponding code) with the team for comparison.

The downside to my prototype was that I couldn’t create the exact 1-to-1 visual design to use when testing with the other disabled participants who were sighted. I wanted to gather feedback on colors, fonts, and wording, so I also recruited low vision and neurodiverse participants for the study. However, my data was skewed because those participants couldn’t make the visual adjustments they needed to fully take in the content, such as recoloring, resizing, and having text read aloud. This was unfortunate, but we at least used the prototype to spark discussions of what does make a page accessible for them.

You may find you are limited in how closely you can replicate the design based on the tools you use or a lack of access to developer assistance. When facing these limitations, consider what is most important to evaluate and determine whether a pared-down version of the site will still give you valuable feedback over no site at all.

Use Wizard of Oz

The Wizard of Oz (WoZ) research method involves the facilitators mimicking system interactions in place of a fully working system. With WoZ, you can create your system’s approximate functionality using equivalent accessible tools and processes.

As an example, I’ll refer you to the talk by an Ally Financial research team that used this method for participants who used screen readers. They pre-programmed screen reader prompts into a clickable spreadsheet and had participants describe aloud what keyboard actions they would take to then trigger the corresponding prompt. While not the ideal set-up for the participants or researchers, it at least brought screen reader user feedback (and recognition of the users themselves) to the early design phases of their work. For more, review their detailed talk “Removing bias with wizard of oz screen reader usability testing”.

This isn’t just limited to screen reader testing, however. In fact, I’ve also often used Wizard of Oz for Voice User Interface (VUI) design. For instance, when I helped create an Alexa “skill” (their name for an app on Amazon speech-enabled devices), our prototype wouldn’t be ready in time for user testing. So, I drafted an idea to use a Bluetooth speaker to announce prompts from a clickable spreadsheet instead. When participants spoke a command to the speaker (thinking it was an Alexa device), the facilitator would select the appropriate pre-recorded prompt or a generic “I don’t understand” message.

Any system can be mimicked when you break down its parts and pieces and think about the ultimate interaction for the user. Creating WoZ set-ups can take creativity and even significant time to put together, but the outcomes can be worth it, particularly for longer-term projects. Once the main pieces are created, the prototype set-up can be edited and reused indefinitely, including during the study or between participants. Also, the investment in an easily edited prototype pays off exponentially if it uncovers something prior to finishing the entire product. In fact, that’s the main goal of this phase of testing: to help teams know what to look out for before they go through the hard work of finishing the product.

Inclusive Research Can No Longer Wait

Much has been documented about inclusive design to help teams craft technology for the widest possible audience. From the Web Content Accessibility Guidelines that help define what it means to be accessible to the Microsoft Inclusive Design Toolkits that tell the human stories behind the guidelines, there is much to learn even before a product begins.

However, the best approach is with direct user feedback. With this, we must recognize the conundrum many researchers are facing: We want to include disabled participants in UX research prior to a product being complete, but often, prototypes we have available for testing are inaccessible. This means testing with something that is essentially broken and will negatively impact our findings.

While it may feel like researchers will always be at a disadvantage if we don’t have the tools we need for testing, I think, instead, it’s time for us to push back. I propose we do this on two fronts:

  1. We make the research work as best we can in the current state.
  2. We advocate for the tools we need to make this more streamlined.

The key is to get disabled perspectives on the record and in the dataset of team members making the decisions. By doing this, hopefully, we shift the culture to wanting and valuing this feedback and bringing awareness to what it takes to make it happen.

Ideally, the awareness raised from our bootstrap efforts will lead to more people helping reduce the current prototype barriers. For some of us, this means urging companies to prioritize accessibility features in their roadmaps. For those working within influential prototype companies, it can mean getting much-needed backing to innovate better in this area.

The current state of our inaccessible digital ecosystem can sometimes feel like an entanglement too big to unravel. However, we must remain steadfast and insist that this does not remain the status quo; disabled users are users, and their diverse and invaluable perspectives must be a part of our research outcomes at all phases.

]]>
hello@smashingmagazine.com (Michele Williams)
<![CDATA[Using AI For Neurodiversity And Building Inclusive Tools]]> https://smashingmagazine.com/2024/04/ai-neurodiversity-building-inclusive-tools/ https://smashingmagazine.com/2024/04/ai-neurodiversity-building-inclusive-tools/ Wed, 24 Apr 2024 15:00:00 GMT In 1998, Judy Singer, an Australian sociologist, coined the term “neurodiversity” by analogy with biodiversity. It means every individual is unique, but sometimes this uniqueness is considered a deficit in the eyes of neurotypicals because it is uncommon. Neurodiversity, however, is the inclusion of these unique ways of thinking, behaving, and learning.

Humans have an innate ability to classify things and make them simple to understand, so neurodivergence is classified as something different, making it much harder to accept as normal.

“Why not propose that just as biodiversity is essential to ecosystem stability, so neurodiversity may be essential for cultural stability?”

— Judy Singer

Culture, though, is more abstract than an ecosystem; it has to do with values, thoughts, expectations, roles, customs, social acceptance, and so on, and that is where things get tricky.

Discoveries and inventions are driven by personal motivation. Judy Singer started exploring the concept of neurodiversity because her daughter was diagnosed with autism. Autistic individuals may find social interaction difficult, yet they are often deeply passionate about particular things in their lives. Like Judy, we designers have a moral obligation to create products everyone can use, including these unique individuals. With the advancement of technology, inclusivity has become far more important, and it should be a priority for every company.

As AI becomes increasingly entangled with our technology, we should also consider how being more inclusive will help, mainly because neurodivergent people represent such a significant number of users. AI allows us to design affordable, adaptable, and supportive products. It also makes it far easier to normalize neurodivergence and to build personalized tools, reminders, alerts, and adaptive language in both content and form.

We need to remember that these changes should not be made only for neurodiverse individuals; they would help everyone. Even neurotypicals have different ways of grasping information; some are kinesthetic learners, and others are auditory or visual.

Diverse thinking is just a different way of approaching and solving problems. Remember, many great minds are neurodivergent. Alan Turing, who cracked the Enigma code, is widely believed to have been autistic. Fun fact: he also laid the theoretical groundwork for machine intelligence. Steve Jobs, Apple’s founder and a pioneering design thinker, had dyslexia. Emma Watson, famously known for her role as Hermione Granger in the Harry Potter series, has Attention-Deficit/Hyperactivity Disorder (ADHD). There are many more innovators and disruptors out there who are different.

Neurodivergence is a non-medical umbrella term used to describe brain function, behavior, and processing that differ from what is considered typical. Let’s also keep in mind that these examples and interpretations are meant to shed some light on an often neglected topic. They should remind us to invest further and investigate how we can make this rapidly growing technology work in favor of this group as we try to normalize neurodiversity.

Types Of Neurodiversities
  • Autism: Autism spectrum disorder (ASD) is a neurological and developmental disorder that affects how people interact with others, communicate, learn, and behave.
  • Learning Disabilities: Conditions such as dyslexia, dysgraphia, and dyscalculia that affect how people read, write, and work with numbers.
  • Attention-Deficit/Hyperactivity Disorder (ADHD): An ongoing pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development.
Making AI Technology More Neuro-inclusive

Artificial Intelligence (AI) enables machines to “think” and perform tasks. This thinking, however, is based on algorithmic logic, and that logic is learned from the many examples, books, and other information AI uses to generate its output. The structure AI mimics is much like our brains; it is called a neural network, and its data processing is similar to how we process information in our brains to solve a problem.

We do not need to do anything special for neurodiversity, which is the beauty of AI technology in its current state. Everything already exists; it is the usage of the technology that needs to change.

There are many ways we could improve it. Let’s look at four ways that are crucial to get us started.

Workflow Improvements

For: Autistic and ADHD
Focus: Working memory

Gartner found that 80% of executives think automation can be applied to any business decision. Businesses have realized that a strategic approach to AI is more successful than a tactical one. For example, AI can support business decisions that would otherwise require a lot of manual research.

AI has played a massive role in automating tasks and will continue to do so; it reduces the time users spend on the repetitive aspects of their jobs, freeing them to focus their efforts on the things that matter. Mundane tasks pile up in working memory, and working memory has a limit: humans can hold roughly 3–5 ideas simultaneously. With more than five ideas at play, we are bound to forget or miss something unless we document it. Juggling these routine but necessary tasks makes it time-consuming and frustrating for users to focus on their actual work. This is especially troublesome for neurodivergent employees.

Autistic and ADHD users might have difficulty following through or focusing on aspects of their work, especially if it does not interest them. Straying thoughts are not uncommon and make it even harder to concentrate. Autistic individuals can become hyper-focused, which may prevent them from taking in other relevant information. ADHD users, by contrast, lose focus quickly because their attention span is limited, so their working memory takes a toll.

AI could identify this and help users overcome it. Improving and automating the workflow will allow them to focus on the critical tasks. It means fewer distractions and more direction. And since these users have trouble with working memory, a tool that captures moments for them to recall later would benefit them greatly.

Example That Can Be Improved

Zoom recently launched its AI companion. When a user joins a meeting as a host, they can use this tool for various actions. One of those actions is to summarize the meeting. It auto-generates meeting notes at the end and shares them. AI companion is an excellent feature for automating notes in the meeting, allowing all the participants to not worry about taking notes.

Opportunity: Along with the auto-generated notes, Zoom should allow users to take notes in-app and include them in the summaries. Sometimes, users have tangential thoughts or ideas that could be useful, and they should be able to capture them as notes. Zoom should also let users choose the type of summary they want, e.g., short, simplified, or a list, giving them more control over it. AI could further personalize this content so participants can comprehend it in their own way. Autistic users could then benefit from their hyper-focused attention in the meeting, while ADHD users could still capture stray thoughts, which the AI would fold into the summarized notes. Big corporations are usually more traditional, making incremental improvements; small tech companies have less to lose, so we often see innovation there.

Neurodivergent Friendly Example

Fireflies.ai is an excellent example of how neuro-inclusivity can be considered, and it covers all the bases Zoom falls short of. It auto-generates meeting notes. It also allows participants to take notes, which are then appended to the auto-generated summary: this summary can be in a bullet list or a paragraph. The tool can also transcribe from the shared slide deck within the summary. It shares audio snippets of important points alongside the transcription. The product can support neurodivergent users far better.

Natural Language Processing

For: Autistic, Learning Disabilities, and ADHD
Focus: Use simple words and give emotional assistance

Words carry different meanings for different people. Some understand figurative language, while others may be confused or even offended by a particular choice of words. If this is so common among neurotypicals, imagine how tricky it is for neurodivergent users. Autistic users can have difficulty understanding metaphorical language and empathizing with others. Users with learning disabilities often have trouble with language, especially figurative language, which perplexes them. And ADHD users have a short attention span, so complex sentences will quickly lose their interest.

For neurodivergent users, simple language aids comprehension far better than complex sentence constructions. Metaphors, jargon, and anecdotal asides can be challenging to interpret and frustrating. That frustration can deter them from pursuing things they perceive as complex. Providing motivation, by enabling them to understand and grow, will let them approach complexity confidently. AI could help many times over by breaking complex language down into straightforward language.
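At its crudest, breaking complex language down into something plainer can be sketched as glossary substitution; a real product would use an LLM plus tone analysis. The glossary below is entirely made up for illustration:

```python
import re

# Toy plain-language rewriter: swaps jargon and idioms for simpler wording.
# The glossary is hand-made and illustrative; a production tool would rely
# on an LLM rather than a fixed lookup table.
GLOSSARY = {
    "utilize": "use",
    "commence": "start",
    "touch base": "talk",
    "low-hanging fruit": "easy wins",
}

def simplify(text):
    for jargon, plain in GLOSSARY.items():
        text = re.sub(re.escape(jargon), plain, text, flags=re.IGNORECASE)
    return text

print(simplify("Let's touch base and commence the review."))
# Let's talk and start the review.
```

Even this toy version shows the shape of the feature: the rewriting rules live in one place and can be tuned per audience, which is what an AI-driven version would do continuously.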

Example That Can Be Improved

Grammarly is a great tool for correcting and recommending language changes. It has grammatical and Grammarly-defined rules based on which the app makes recommendations. It also has a feature that allows users to select the tone of voice or goals, casual or academic style, enhancing the written language to the expectation. Grammarly also lets organizations define style guides; it could help the user write based on the organization’s expectations.

Opportunity: Grammarly has yet to implement generative AI assistive technology, though that might change in the future. Large language models (LLMs) could further convert text into inclusive language that considers cultural and regional relevance. Most presets are specific to the rules Grammarly or the organization has defined, which is limiting. Sentiment analysis is also not yet part of their rules; for example, if a write-up is supposed to be negative, the app still recommends changing it or making it positive.

Neurodivergent Friendly Example

Writer is another beautiful product that empowers users to follow guidelines established by the organization and, of course, the grammatical rules. It provides various ways to rewrite sentences, e.g., simplify, polish, shorten, and so on. Writer also assists with sentence reconstruction and recommendations based on the type of content the user writes, for instance, an error message or a tooltip. With those features and many more on its generative AI list, Writer can perform better for neurodivergent users.

Cognitive Assistance

For: Autistic, Learning Disabilities, and ADHD
Focus: Suggestive technology

The UK’s Equality Act 2010 was established to bring equality to the workplace, and its protections extend to neurodivergent employees. Employers need to understand the additional needs of neurodivergent employees and amend existing policies to incorporate them. The essence of the Equality Act can be translated into actionable digital elements that bring equality to the usage of products.

Neurodiverse or not, cognitive differences are present in both groups. The gap becomes more significant when we talk about them separately. Think about it: all AI assistive technologies are cognition supplements.

Cognoassist conducted a study to understand cognition across people and found that fewer than 10% of participants score within the “typical” range of assessment. This suggests that the dividing line is superficial, even if differences are observable.

Cognition is not just intelligence but an interplay of multiple mental processes, irrespective of neural inclination; neurodivergence is simply a different way of processing and reproducing information than what is considered typical. Nonetheless, neurodivergent users need assistive technologies more than neurotypicals do because such technologies fill the gap quickly, allowing them to function at the same level as technology becomes more inclusive.

Example That Can Be Improved

ClickUp is a project management tool that has plenty of automation baked into it. It allows users to automate or customize their daily routine, which helps everyone on the team to focus on their goals. It also lets users connect various productivity and management apps to make it a seamless experience and a one-stop shop for everything they need. The caveat is that the automation is limited to some actions.

Opportunity: Neurodivergent users sometimes need more cognitive assistance than neurotypicals. Initiating and completing tasks can be difficult, and a nudge could help them get started or follow through. The tool could also help them with organization, which would benefit them greatly. Autistic individuals often prefer to complete a task in one go, while people with ADHD like to mix tasks up, getting a necessary break from each before refocusing. An intelligent AI system could help by creating a more personalized day plan and a to-do list that gets things started.
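The one-go versus mixed-up scheduling preference can be sketched as two planning strategies over the same task list. This is a deliberately simplistic illustration; a real assistant would learn these preferences and durations from behavior:

```python
# Sketch of preference-aware day planning: "single-focus" schedules each
# task to completion in one go, while "interleaved" alternates between
# tasks in short chunks so the user gets regular context changes.
# Durations are in minutes; the styles and values are illustrative.
def plan_day(tasks, style, chunk=30):
    if style == "single-focus":
        return [f"{name} ({mins} min)" for name, mins in tasks.items()]
    # "interleaved": round-robin through the tasks in `chunk`-minute slots
    remaining = dict(tasks)
    slots = []
    while remaining:
        for name in list(remaining):
            slot = min(chunk, remaining[name])
            slots.append(f"{name} ({slot} min)")
            remaining[name] -= slot
            if remaining[name] <= 0:
                del remaining[name]
    return slots

tasks = {"write report": 60, "answer email": 30}
print(plan_day(tasks, "single-focus"))
# ['write report (60 min)', 'answer email (30 min)']
print(plan_day(tasks, "interleaved"))
# ['write report (30 min)', 'answer email (30 min)', 'write report (30 min)']
```

The point of the sketch is that the same to-do list can serve both groups; only the sequencing strategy changes per user.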

Neurodivergent Friendly Example

Motion focuses on planning and scheduling the user’s day to help with their productivity goals. When users connect their calendars to this tool, they can schedule their meetings with AI by considering heads-down time or focused attention sessions based on each user’s requirement. The user can personalize their entire schedule according to their liking. The tool will proactively schedule incoming meetings or make recommendations on time. This AI assistive technology also aids them with planning around deadlines.

Adaptive Onboarding

For: Learning Disabilities and ADHD
Focus: Reduce Frustration

According to Epsilon, 80% of consumers want a personalized experience, and all of this personalization exists to make the user’s workflow easier. Personalized experiences start with the introduction to the product and continue through its usage. Onboarding helps users learn about the product, but learning continues long after the initial product presentation.

We cannot expect users to remember everything about a product once onboarding is complete; they will need assistance again in the future. Over time, if users have a hard time comprehending or completing a task, they get frustrated; this is particularly true for ADHD users. Likewise, users with learning disabilities may not remember every step, either because the steps are too complex or because there are too many of them.

Adaptive onboarding allows everyone to re-learn when needed, with help available at the moment it is required. This type of onboarding could be AI-driven and much more generative, catering to different learning styles with assistive, audio, or video presentations.
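Serving the same onboarding step in different formats can be sketched as a simple content-selection layer. The step name, formats, and URLs below are invented for illustration:

```python
# Each onboarding step is authored once per format, and the tool serves
# whichever format matches the user's learning preference, falling back
# to text. Step names, formats, and URLs are invented placeholders.
STEPS = {
    "create-project": {
        "text": "Click 'New Project', name it, then press Save.",
        "video": "https://example.com/videos/create-project.mp4",
        "audio": "https://example.com/audio/create-project.mp3",
    },
}

def onboarding_step(step, preference):
    """Return the onboarding content for a step in the preferred format."""
    formats = STEPS[step]
    return formats.get(preference, formats["text"])

print(onboarding_step("create-project", "video"))
print(onboarding_step("create-project", "kinesthetic"))  # unknown preference, text fallback
```

An AI-driven version would generate the alternate formats rather than require authoring each one, but the selection logic, preference first, safe fallback second, stays the same.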

Example That Can Be Improved

Product Fruits has a plethora of offerings, including onboarding. It offers personalization and the ability to tailor the onboarding to cover the product for new users. Allowing customization with onboarding gives the product team more control over what needs attention. It also provides the capability to track product usage based on the onboarding.

Opportunity: Offering AI interventions for different personas or segments would give the tool an additional layer of experience tailored to individual needs. Imagine a user with ADHD trying to figure out how to use a feature; they will get frustrated if they cannot work out how to use it. What if the tool intuitively nudged the user on how to complete the task? Similarly, if completing a task is complex and requires multiple steps, users with learning disabilities have difficulty following and reproducing it.

Neurodivergent Friendly Example

Onboarding does not always need to happen at the start of the product introduction. Users inevitably end up in situations where they need to find a step to complete a task but have difficulty discovering it. In such cases, they usually seek help by asking colleagues or looking it up on the product help page.

Chameleon helps here by letting users call on AI whenever they need it; users can ask for help anytime, and the AI generates answers in context.

Considerations

All the issues I mentioned are present in everyone; the difference is the frequency and intensity with which they occur in neurotypical versus neurodiverse individuals. Everyday discussions, conclusions, critical thinking, comprehension, and so on can differ vastly; it is as if neurodiverse individuals’ brains are wired differently. That makes it all the more important to build tools that solve problems for neurodiverse users, because in doing so, we inadvertently solve them for everyone.

It is easy to argue that every human goes through these problems. But we tend to forget their intensity and criticality for neurodiverse individuals, for whom they are far harder to shrug off than for neurotypicals, who can adapt much more quickly. Similarly, AI has to learn and understand the problems it needs to solve, and the algorithm will be confused unless it has multiple examples to learn from.

Large language models (LLMs), such as the one behind ChatGPT, are trained on vast amounts of data. They are accurate most of the time; however, they sometimes hallucinate and give an inaccurate answer. That can be a considerable problem when no guidelines exist beyond the LLM itself. As mentioned above, hallucination remains a possibility, but supplying company guidelines and information helps produce correct results.

This could also make users more dependent on AI, and there is no harm in that. When neurodiverse individuals need assistance, a human with the required patience cannot be present every time. AI’s directness is an advantage here, and it is especially helpful in professional contexts.

Conclusion

Designers should create efficient workflows for neurodivergent users who are having difficulty with working memory, comprehending complex language, learning intricate details, and so on. AI could help by providing cognitive assistance and adaptive technologies that benefit neurodivergent users greatly. Neurodiversity should be considered in product design; it needs more attention.

AI has become increasingly woven into every aspect of users’ lives. Some uses are obvious, like conversational UIs and chatbots, while others are hidden algorithms, like recommendation engines.

Many problems specific to accessibility are being solved, but are they being solved while keeping neurodiverse issues in mind?

Jamie Dimon famously said:

“Problems don’t age well.”

— Jamie Dimon (CEO, JPMorgan Chase)

This means we have to take critical issues into account sooner. Building an inclusive world for those 1.6 billion people is not a need for the future but a necessity of the present. We should strive to create an inclusive world for neurodiverse users, especially now that AI is booming: making it inclusive today is far easier than retrofitting it after it scales into a behemoth set of features touching every aspect of our lives.

]]>
hello@smashingmagazine.com (Pratik Joglekar)
<![CDATA[F-Shape Pattern And How Users Read]]> https://smashingmagazine.com/2024/04/f-shape-pattern-how-users-read/ https://smashingmagazine.com/2024/04/f-shape-pattern-how-users-read/ Tue, 23 Apr 2024 10:00:00 GMT We rarely read on the web. We mostly scan. That’s a reliable strategy to quickly find what we need in times when we’re confronted with more information than we can handle. But scanning also means that we often skip key details. This is not only inefficient but can also be very damaging to your business.

Let’s explore how users read — or scan — on the web and how we can prevent harmful scanning patterns.

This article is part of our ongoing series on design patterns. It’s also an upcoming part of the 10h-video library on Smart Interface Design Patterns 🍣 and the upcoming live UX training as well. Use code BIRDIE to save 15% off.

Scanning Patterns On The Web

One of the most common scanning patterns is the F-shape pattern. Users start at the top left, read a few lines, and then start to scan vertically. But it isn’t the only scanning pattern on the web. Being aware of different patterns is the first step to helping users better navigate your content.

  • F-Pattern
    Users first read horizontally, then read less and less until they start scanning vertically. The first lines of text and the first words on each line receive more attention. The pattern also appears in RTL interfaces, but the F is flipped.
  • Layer-Cake Pattern
    Users scan consistently across headings, with deliberate jumps into body text in between. Most effective way to scan pages and find key content details.
  • Love-at-First-Sight Pattern
    Users are often “satisficers,” searching for what’s good enough rather than exhaustively for the best option. In search results, they often fixate on a single result.
  • Lawn-Mower Pattern
    In tables, users start in the top left cell, move to the right until the end of the row, and then drop down to the next row, moving in the same pattern.
  • Spotted Pattern
    Skipping big chunks of text and focusing on patterns. Often happens in search when users look for specific words, shapes, links, dates, and so on.
  • Marking Pattern
    Eyes focus in one place as the mouse scrolls or a finger swipes. More common on mobile than on desktop.
  • Bypassing Pattern
    Users deliberately skip the first words of the line when multiple lines start with the same word.
  • Commitment Pattern
    Reading the entire content, word by word. Happens when users are highly motivated and interested. Common for older adults.
F-Shape Scanning And The Lack Of Rhythm

On the web, we often argue about the fold, and while it does indeed exist, it really doesn’t matter. As Christopher Butler said, “length is not the problem — lack of rhythm is.”

A designer’s main job is to direct attention intentionally. Scanning is partial attention. Reading is focused attention. A screen without intentional rhythm will lose attention as it is being scanned. One with controlled rhythm will not only retain attention, it will deepen it.

Think of F-shape scanning as a user’s fallback behavior if the design doesn’t guide them through the content well enough. So prevent it whenever you can. At least, give users anchors to move to E-shape scanning, and at best, direct their attention to relevant sections with Layer-Cake scanning.

Direct Attention And Provide Anchors

Good formatting can reduce the impact of scanning. To structure scanning and guide a user’s view, add headings and subheadings. For engagement, alternate sizes, spacing, and patterns. For landing pages, alternate points of interest.

Users spend 80% of the time viewing the left half of a page. So, as you structure your content, keep in mind that horizontal attention leans left. That’s also where you might want to position navigation to aid wayfinding.

Generally, it’s a good idea to visually group small chunks of related content. To offer anchors, consider front-loading headings with keywords and key points — it will help users quickly make sense of what awaits them. Adding useful visuals can also give users points to anchor to.
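One way to internalize why front-loading matters: under F-scanning, roughly only the first lines and the first words of later lines get read. A toy simulation (the cut-off values are arbitrary illustrations, not eye-tracking data):

```python
# Toy F-pattern simulation: the first `full_lines` lines are read fully;
# after that, only the first `words_per_line` words of each line are seen.
# The cut-off values are arbitrary, chosen only to make the effect visible.
def f_scan(lines, full_lines=2, words_per_line=2):
    seen = []
    for i, line in enumerate(lines):
        words = line.split()
        seen.append(" ".join(words if i < full_lines else words[:words_per_line]))
    return seen

copy = [
    "Pricing starts at $5 per month for the basic plan.",
    "Refunds are available within 30 days of purchase.",
    "Cancellation requires notice 30 days before renewal.",
    "Support is included only in the premium tier.",
]
for line in f_scan(copy):
    print(line)
# The last two lines shrink to "Cancellation requires" and "Support is":
# front-loaded keywords survive the scan; trailing details do not.
```

Run against your own copy, a simulation like this makes a blunt argument for putting keywords first in headings and sentences.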

Another way to guide users through the page is by adding a few, but noticeable, accents to direct attention. You will need visible, well-structured headings and subheadings that stand out from the rest of the content on the page. In fact, adding subheadings throughout the page might be the best strategy to help users find information faster.

Data-heavy content, such as large, complex tables, requires some extra attention and care. To help users keep their position as they move across the table, keep headers floating. Floating headers provide an anchor no matter where your user’s eyes are focused and make it easier to look around and compare data.

Key Takeaways
  • Users spend 80% of time viewing the left half of a page.
  • They read horizontally, then skip to content below.
  • Scanning is often inefficient as users miss large chunks of content and skip key details.
  • Good formatting reduces the impact of F-scanning.
  • Add headings and subheadings for structured scanning.
  • Show keywords and key points early in your headings to improve scanning.
  • For engagement, alternate sizes, spacing, patterns.
  • For landing pages, alternate points of interest.
  • Visually group small chunks of related content.
  • Keep headers floating in large, complex data tables.
  • Add useful visuals to give users points to anchor to.
  • Horizontal attention leans left: favor top/left navigation.

Meet Smart Interface Design Patterns

If you are interested in similar insights around UX, take a look at Smart Interface Design Patterns, our 10h-video course with 100s of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview.

Meet Smart Interface Design Patterns, our video course on interface design & UX.

100 design patterns & real-life examples.
10h-video course + live UX training. Free preview.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[How To Work With GraphQL In WordPress In 2024]]> https://smashingmagazine.com/2024/04/how-work-graphql-wordpress-2024/ https://smashingmagazine.com/2024/04/how-work-graphql-wordpress-2024/ Fri, 19 Apr 2024 10:00:00 GMT Three years ago, I published “Making GraphQL Work In WordPress,” where I compared the two leading GraphQL servers available for WordPress at the time: WPGraphQL and Gato GraphQL. In the article, I aimed to delineate the scenarios best suited for each.

Full disclosure: I created Gato GraphQL, originally known as GraphQL API for WordPress, as referenced in the article.

A lot of new developments have happened in this space since my article was published, and it’s a good time to consider what’s changed and how it impacts the way we work with GraphQL data in WordPress today.

This time, though, let’s focus less on when to choose one of the two available servers and more on the developments that have taken place and how both plugins and headless WordPress, in general, have been affected.

Headless Is The Future Of WordPress (And Shall Always Be)

There is no getting around it: Headless is the future of WordPress! At least, that is what we have been reading in posts and tutorials for the last eight or so years. Being Argentinian, this reminds me of an old joke: “Brazil is the country of the future and shall always be!” The future is both imminent and far away.

Truth is, WordPress sites that actually make use of headless capabilities — via GraphQL or the WP REST API — represent no more than a small sliver of the overall WordPress market. WPEngine may have the most extensive research into headless usage in its “The State of Headless” report. Still, it’s already a few years old and focused more on both the general headless movement (not just WordPress) and the context of enterprise organizations. But the future of WordPress, according to the report, is written in the clouds:

“Headless is emphatically here, and with the rapid rise in enterprise adoption from 2019 (53%) to 2021 (64%), it’s likely to become the industry standard for large-scale organizations focused on building and maintaining a powerful, connected digital footprint. […] Because it’s already the most popular CMS in the world, used by many of the world’s largest sites, and because it’s highly compatible as a headless CMS, bringing flexibility, extensibility, and tons of features that content creators love, WordPress is a natural fit for headless configurations.”

Just a year ago, a Reddit user informally polled people in r/WordPress, and while it’s far from scientific, the results are about as reliable as the conjecture before it:

Headless may very well be the future of WordPress, but the proof has yet to make its way into everyday developer stacks. It could well be that general interest and curiosity are driving the future more than tangible work, as another of WPEngine’s articles from the same year as the report suggests when identifying “Headless WordPress” as a hot search term. This could just as well be a lot more smoke than fire.

That’s why I believe that “headless” is not yet a true alternative to a traditional WordPress stack that relies on the WordPress front-end architecture. I see it more as another approach, or flavor, to building websites in general and a niche one at that.

That was all true merely three years ago and is still true today.

WPEngine “Owns” Headless WordPress

It’s no coincidence that we’re referencing WPEngine when discussing headless WordPress because the hosting company is heavily betting on it becoming the de facto approach to WordPress development.

Take, for instance, WPEngine’s launch of Faust.js, a headless framework with WPGraphQL as its foundation. Faust.js is an opinionated framework that allows developers to use WordPress as the back-end content management system and Next.js to render the front-end side of things. Among other features, Faust.js replicates the WordPress template system for Next.js, making the configuration to render posts and pages from WordPress data a lot easier out of the box.

WPEngine is well-suited for this task, as it can offer hosting for both Node.js and WordPress as a single solution via its Atlas platform. WPEngine also bought the popular Advanced Custom Fields (ACF) plugin that helps define relationships among entities in the WordPress data model. Add to that the fact that WPEngine has taken over the Headless WordPress Discord server, with discussions centered around WPGraphQL, Faust, Atlas, and ACF. It could very well be named the WPEngine-Powered Headless WordPress server instead.

But WPEngine’s agenda and dominance in the space are not the point; it’s more that the company has a lot of skin in the game as far as anticipating a headless WordPress future goes. Even more so now than three years ago.

GraphQL API for WordPress → Gato GraphQL

I created a plugin several years ago called GraphQL API for WordPress to help support headless WordPress development. It converts data pulled from the WordPress REST API into structured GraphQL data for more efficient and flexible queries based on the content managed and stored in WordPress.

More recently, I released a significantly updated version of the plugin, so updated that I chose to rename it to Gato GraphQL, and it is now freely available in the WordPress Plugin Directory. It’s a freemium offering like many WordPress plugin pricing models. The free, open-source version in the plugin directory provides the GraphQL server, maps the WordPress data model into the GraphQL schema, and provides several useful features, including custom endpoints and persisted queries. The paid commercial add-on extends the plugin by supporting multiple query executions, automation, and an HTTP client to interact with external services, among other advanced features.

I know this sounds a lot like a product pitch, but stick with me because there’s a point to the decision I made to revamp my existing GraphQL plugin and introduce a slew of premium services as features. It fits with my belief that

WordPress is becoming more and more open to giving WordPress developers and site owners a lot more room for innovation to work collaboratively and manage content in new and exciting ways both in and out of WordPress.

JavaScript Frameworks & Headless WordPress

Gatsby was perhaps the leading JavaScript framework for creating headless WordPress sites at the time my first article was published in 2021. These days, though, Gatsby is in steep decline, and its integration with WordPress is no longer maintained.

Next.js was also a leader back then and is still very popular today. The framework includes several starter templates designed specifically for headless WordPress instances.

SvelteKit and Nuxt are surging these days and are considered good choices for building headless WordPress sites, as was discussed during WordCamp Asia 2024.

Today, in 2024, we continue to see new JavaScript framework entrants in the space, notably Astro. Despite Gatsby’s recent troubles, the landscape of using JavaScript frameworks to create front-end experiences from the WordPress back-end is largely the same as it was a few years ago, if maybe a little easier, thanks to the availability of new templates that are integrated right out of the box.

GraphQL Transcends Headless WordPress

The biggest difference between the WPGraphQL and Gato GraphQL plugins is that, where WPGraphQL is designed to convert REST API data into GraphQL data in a single direction, Gato GraphQL uses GraphQL data in both directions in a way that can be used to manage non-headless WordPress sites as well. I say this not as a way to get you to use my plugin but to help describe how GraphQL has evolved to the point where it is useful for more cases than headless WordPress sites.

Managing a WordPress site via GraphQL is possible because GraphQL is an agnostic tool for interacting with data, whatever that interaction may be. GraphQL can fetch data from the server, modify it, store it back on the server, and invoke external services. These interactions can all be coded within a single query.

GraphQL can then be used to perform a regex search-and-replace on a string across all posts, which is practical when doing site migrations. We can also import a post from another WordPress site or even from an RSS feed or CSV source.

And thanks to the likes of WordPress hooks and WP-Cron, executing a GraphQL query can be an automated task. For instance, whenever the publish_post hook is triggered — i.e., a new post on the site is published — we can execute certain actions, like sending an email notification to the site admin or generating a featured image with AI if the post lacks one.

In short, GraphQL works both ways and opens up new possibilities for better developer and author experiences!

GraphQL Becomes A “Core” Feature In WordPress 6.5

I have gone on record saying that GraphQL should not be a core part of WordPress. There’s a lot of reasoning behind my opinion, but what it boils down to is that the WP REST API is perfectly capable of satisfying our needs for passing data around, and adding GraphQL to the mix could be a security risk in some conditions.

My concerns aside, GraphQL officially became a first-class citizen of WordPress when it was baked into WordPress 6.5 with the introduction of Plugin Dependencies, a feature that allows plugins to identify other plugins as dependencies. We see this in the form of a new “Requires Plugins” comment in a plugin’s header:

/**
 * Plugin Name: My Ecommerce Payments for Gato GraphQL
 * Requires Plugins: gatographql
 */

WordPress sees which plugins are needed for the current plugin to function properly and installs everything together at the same time, assuming that the dependencies are readily available in the WordPress Plugin Directory.

So, check this out. Since WPGraphQL and Gato GraphQL are in the plugin directory, we can now create another plugin that internally uses GraphQL and distribute it via the plugin directory (or anywhere else, really) without having to explain how to install its dependencies. For instance, we can now use GraphQL to fetch data to render the plugin’s blocks.

In other words, plugins are now capable of more symbiotic relationships that open even more possibilities! Beyond that, every plugin in the WordPress Plugin Directory is now technically part of WordPress Core, including WPGraphQL and Gato GraphQL. So, yes, GraphQL is now technically a “core” feature that can be leveraged by other developers.

Helping WordPress Lead The CMS Market, Again

While delivering the keynote presentation during WordCamp Asia 2024, Human Made co-founder Noel Tock discussed the future of WordPress. He argues that WordPress growth has stagnated in recent years thanks to a plethora of modern web services that can be combined into composable content management systems tailored to developers in a way that WordPress simply isn’t.

Tock continues to explain how WordPress can once again become a growth engine by cleaning up the WordPress plugin ecosystem and providing first-class integrations with external services.

Do you see where I am going with this? GraphQL could play an instrumental role in WordPress’s future success. It very well could be the link between WordPress and all the different services it interacts with, positioning WordPress at the center of the web. The recent Plugin Dependencies feature we noted earlier is a peek at what WordPress could look like as it adopts more composable approaches to content management that support its position as a market leader.

Conclusion

“Headless” WordPress is still “the future” of WordPress. But as we’ve discussed, there’s very little actual movement toward that future in terms of developers buying into it, despite the deep interest they display in headless architectures where WordPress purely plays the back-end role.

There are new and solid frameworks that rely on GraphQL for querying data, and those won’t go away anytime soon. And those frameworks are the ones that rely on existing WordPress plugins that consume data from the WordPress REST API and convert it to structured GraphQL data.

Meanwhile, WordPress is making strides toward greater innovation as plugin developers are now able to leverage other plugins as dependencies for their plugins. Every plugin listed in the WordPress Plugin Directory is essentially a feature of WordPress Core, including WPGraphQL and Gato GraphQL. That means GraphQL is readily available for any plugin developer to tap into as of WordPress 6.5.

GraphQL can be used not only for headless but also to manage the WordPress site. Whenever data must be transformed, whether locally or by invoking an external service, GraphQL can be the tool to do it. That even means that data transforms can be triggered automatically to open up new and interesting ways to manage content, both inside and outside of WordPress. It works both ways!

So, yes, even though headless is the future of WordPress (and shall always be), GraphQL could indeed be a key component in making WordPress once again an innovative force that shapes the future of CMS.

]]>
hello@smashingmagazine.com (Leonardo Losoviz)
<![CDATA[Converting Plain Text To Encoded HTML With Vanilla JavaScript]]> https://smashingmagazine.com/2024/04/converting-text-encoded-html-vanilla-javascript/ https://smashingmagazine.com/2024/04/converting-text-encoded-html-vanilla-javascript/ Wed, 17 Apr 2024 13:00:00 GMT When copying text from a website to your device’s clipboard, there’s a good chance that you will get the formatted HTML when pasting it. Some apps and operating systems have a “Paste Special” feature that will strip those tags out for you to maintain the current style, but what do you do if that’s unavailable?

The same goes for converting plain text into formatted HTML. One of the closest things we have to converting plain text into HTML is writing in Markdown as an abstraction. You may have seen examples of this in many comment forms in articles just like this one: write the comment in Markdown, and it is parsed as HTML.

Even better would be no abstraction at all! You may have also seen (and used) a number of online tools that take plainly written text and convert it into formatted HTML. The UI makes the conversion and previews the formatted result in real time.

Providing a way for users to author basic web content, like comments, without knowing even the first thing about HTML is a noble pursuit, as it lowers barriers to communicating and collaborating on the web. Saying it helps “democratize” the web may be heavy-handed, but it doesn’t conflict with that vision!

We can build a tool like this ourselves. I’m all for using existing resources where possible, but I’m also for demonstrating how these things work and maybe learning something new in the process.

Defining The Scope

There are plenty of assumptions and considerations that could go into a plain-text-to-HTML converter. For example, should we assume that the first line of text entered into the tool is a title that needs corresponding <h1> tags? Is each new line truly a paragraph, and how does linking content fit into this?

Again, the idea is that a user should be able to write without knowing Markdown or HTML syntax. This is a big constraint, and there are far too many HTML elements we might encounter, so it’s worth knowing the context in which the content is being used. For example, if this is a tool for writing blog posts, then we can limit the scope of which elements are supported based on those that are commonly used in long-form content: <h1>, <p>, <a>, and <img>. In other words, it will be possible to include top-level headings, body text, linked text, and images. There will be no support for bulleted or ordered lists, tables, or any other elements for this particular tool.

The front-end implementation will rely on vanilla HTML, CSS, and JavaScript to establish a small form with a simple layout and functionality that converts the text to HTML. There is a server-side aspect to this if you plan on deploying it to a production environment, but our focus is purely on the front end.

Looking At Existing Solutions

There are existing ways to accomplish this. For example, some libraries offer a WYSIWYG editor. Import a library like TinyMCE with a single <script> and you’re good to go. WYSIWYG editors are powerful and support all kinds of formatting, even applying CSS classes to content for styling.

But TinyMCE isn’t the most efficient package at about 500 KB minified. That’s not a criticism as much as an indication of how much functionality it covers. We want something more “barebones” than that for our simple purpose. Searching GitHub surfaces more possibilities. The solutions, however, seem to fall into one of two categories:

  • The input accepts plain text, but the generated HTML only supports the HTML <h1> and <p> tags.
  • The input converts plain text into formatted HTML, but by “plain text,” the tool seems to mean “Markdown” (or a variety of it) instead. The txt2html Perl module (from 1994!) would fall under this category.

Even if a perfect solution for what we want was already out there, I’d still want to pick apart the concept of converting text to HTML to understand how it works and hopefully learn something new in the process. So, let’s proceed with our own homespun solution.

Setting Up The HTML

We’ll start with the HTML structure for the input and output. For the input element, we’re probably best off using a <textarea>. For the output element and related styling, choices abound. The following is merely one example with some very basic CSS to place the input <textarea> on the left and an output <div> on the right:

See the Pen Base Form Styles [forked] by Geoff Graham.

You can further develop the CSS, but that isn’t the focus of this article. There is no question that the design can be prettier than what I am providing here!

Capture The Plain Text Input

We’ll set an onkeyup event handler on the <textarea> to call a JavaScript function called convert() that does what it says: convert the plain text into HTML. The conversion function should accept one parameter, a string, for the user’s plain text input entered into the <textarea> element:

<textarea onkeyup='convert(this.value);'></textarea>

onkeyup is a better choice than onkeydown in this case, as onkeyup will call the conversion function after the user completes each keystroke, as opposed to before it happens. This way, the output, which is refreshed with each keystroke, always includes the latest typed character. If the conversion is triggered with an onkeydown handler, the output will exclude the most recent character the user typed. This can be frustrating when, for example, the user has finished typing a sentence but cannot yet see the final punctuation mark, say a period (.), in the output until typing another character first. This creates the impression of a typo, glitch, or lag when there is none.

In JavaScript, the convert() function has the following responsibilities:

  1. Encode the input in HTML.
  2. Process the input line-by-line and wrap each individual line in either a <h1> or <p> HTML tag, whichever is most appropriate.
  3. Process the output of the transformations as a single string, wrap URLs in HTML <a> tags, and replace image file names with <img> elements.

And from there, we display the output. We can create separate functions for each responsibility. Let’s name them accordingly:

  1. html_encode()
  2. convert_text_to_HTML()
  3. convert_images_and_links_to_HTML()

Each function accepts one parameter, a string, and returns a string.

Encoding The Input Into HTML

Use the html_encode() function to HTML encode/sanitize the input. HTML encoding refers to the process of escaping or replacing certain characters in a string input to prevent users from inserting their own HTML into the output. At a minimum, we should replace the following characters:

  • < with &lt;
  • > with &gt;
  • & with &amp;
  • ' with &#39;
  • " with &quot;

JavaScript does not provide a built-in way to HTML-encode input as other languages do. For example, PHP has htmlspecialchars(), htmlentities(), and strip_tags() functions. That said, it is relatively easy to write our own function that does this, which is exactly what the html_encode() function we defined earlier is for:

function html_encode(input) {
  const textArea = document.createElement("textarea");
  textArea.innerText = input;
  return textArea.innerHTML.split("<br>").join("\n");
}
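The <textarea> trick above leans on the browser to do the escaping. If you would rather cover all five characters from the earlier list explicitly (including the quotes, which the <textarea> approach leaves untouched since innerHTML only escapes &, <, and >), a plain chain of replacements works as well. This is an optional sketch, not part of the original tool:

```javascript
// Escape the five characters listed above with explicit replacements.
// "&" must be handled first so the other entities are not double-encoded.
function html_encode_explicit(input) {
  return input
    .replaceAll("&", "&amp;")
    .replaceAll("<", "&lt;")
    .replaceAll(">", "&gt;")
    .replaceAll("'", "&#39;")
    .replaceAll('"', "&quot;");
}
```

Unlike the DOM-based version, this function also runs outside the browser, which makes it easy to unit test or reuse on a Node.js back end.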

HTML encoding of the input is a critical security consideration. It prevents unwanted scripts or other HTML manipulations from getting injected into our work. Granted, front-end input sanitization and validation are both merely deterrents because bad actors can bypass them. But we may as well make them work a little harder.

As long as we are on the topic of securing our work, make sure to HTML-encode the input on the back end, where the user cannot interfere. At the same time, take care not to encode the input more than once. Encoding text that is already HTML-encoded will break the output functionality. The best approach for back-end storage is for the front end to pass the raw, unencoded input to the back end, then ask the back-end to HTML-encode the input before inserting it into a database.

That said, this only accounts for sanitizing and storing the input on the back end. We still have to display the encoded HTML output on the front end. There are at least two approaches to consider:

  1. Convert the input to HTML after HTML-encoding it and before it is inserted into a database.
    This is efficient, as the input only needs to be converted once. However, this is also an inflexible approach, as updating the HTML becomes difficult if the output requirements happen to change in the future.
  2. Store only the HTML-encoded input text in the database and dynamically convert it to HTML before displaying the output for each content request.
    This is less efficient, as the conversion will occur on each request. However, it is also more flexible since it’s possible to update how the input text is converted to HTML if requirements change.

Applying Semantic HTML Tags

Let’s use the convert_text_to_HTML() function we defined earlier to wrap each line in their respective HTML tags, which are going to be either <h1> or <p>. To determine which tag to use, we will split the text input on the newline character (\n) so that the text is processed as an array of lines rather than a single string, allowing us to evaluate them individually.

function convert_text_to_HTML(txt) {
  // Output variable
  let out = '';
  // Split text at the newline character into an array
  const txt_array = txt.split("\n");
  // Get the number of lines in the array
  const txt_array_length = txt_array.length;
  // Variable to keep track of the (non-blank) line number
  let non_blank_line_count = 0;

  for (let i = 0; i < txt_array_length; i++) {
    // Get the current line
    const line = txt_array[i];
    // Continue if a line contains no text characters
    if (line === ''){
      continue;
    }

    non_blank_line_count++;
    // If a line is the first line that contains text
    if (non_blank_line_count === 1){
      // ...wrap the line of text in a Heading 1 tag
      out += `<h1>${line}</h1>`;
      // ...otherwise, wrap the line of text in a Paragraph tag.
    } else {
      out += `<p>${line}</p>`;
    }
  }

  return out;
}

In short, this little snippet loops through the array of split text lines and ignores lines that do not contain any text characters. From there, we can evaluate whether a line is the first one in the series. If it is, we slap a <h1> tag on it; otherwise, we mark it up in a <p> tag.

This logic could be used to account for other types of elements that you may want to include in the output. For example, perhaps the second line is assumed to be a byline that names the author and links up to an archive of all author posts.
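As a hypothetical sketch of that idea (the byline class name is my own invention, not part of the tool), the loop only needs one extra branch:

```javascript
// Variation of the convert_text_to_HTML() logic: treat the second
// non-blank line as a byline wrapped in a <p class="byline"> tag.
function convert_text_to_HTML_with_byline(txt) {
  let out = "";
  let non_blank_line_count = 0;
  for (const line of txt.split("\n")) {
    // Skip lines that contain no text characters
    if (line === "") continue;
    non_blank_line_count++;
    if (non_blank_line_count === 1) {
      out += `<h1>${line}</h1>`;
    } else if (non_blank_line_count === 2) {
      out += `<p class="byline">${line}</p>`;
    } else {
      out += `<p>${line}</p>`;
    }
  }
  return out;
}
```

Linking the byline to an author archive would then only be a matter of wrapping the line in an <a> tag as well.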

Tagging URLs And Images With Regular Expressions

Next, we’re going to create our convert_images_and_links_to_HTML() function to encode URLs and images as HTML elements. It’s a good chunk of code, so I’ll drop it in and we’ll immediately start picking it apart together to explain how it all works.


function convert_images_and_links_to_HTML(string){
  let urls_unique = [];
  let images_unique = [];
  const urls = string.match(/https?:\/\/[^\s<),]+[^\s<),.]/gmi) ?? [];
  const imgs = string.match(/[^"'>\s]+\.(jpg|jpeg|gif|png|webp)/gmi) ?? [];

  const urls_length = urls.length;
  const images_length = imgs.length;

  for (let i = 0; i < urls_length; i++){
    const url = urls[i];
    if (!urls_unique.includes(url)){
      urls_unique.push(url);
    }
  }

  for (let i = 0; i < images_length; i++){
    const img = imgs[i];
    if (!images_unique.includes(img)){
      images_unique.push(img);
    }
  }

  const urls_unique_length = urls_unique.length;
  const images_unique_length = images_unique.length;

  for (let i = 0; i < urls_unique_length; i++){
    const url = urls_unique[i];
    if (images_unique_length === 0 || !images_unique.includes(url)){
      const a_tag = `<a href="${url}" target="_blank">${url}</a>`;
      string = string.replace(url, a_tag);
    }
  }

  for (let i = 0; i < images_unique_length; i++){
    const img = images_unique[i];
    const img_tag = `<img src="${img}" alt="">`;
    const img_link = `<a href="${img}">${img_tag}</a>`;
    string = string.replace(img, img_link);
  }
  return string;
}

Unlike the convert_text_to_HTML() function, here we use regular expressions to identify the terms that need to be wrapped and/or replaced with <a> or <img> tags. We do this for a couple of reasons:

  1. The previous convert_text_to_HTML() function handles text that would be transformed to the HTML block-level elements <h1> and <p>, and, if you want, other block-level elements such as <address>. Block-level elements in the HTML output correspond to discrete lines of text in the input, which you can think of as paragraphs, the text entered between presses of the Enter key.
  2. On the other hand, URLs in the text input are often included in the middle of a sentence rather than on a separate line. Images that occur in the input text are often included on a separate line, but not always. While you could identify text that represents URLs and images by processing the input line-by-line — or even word-by-word, if necessary — it is easier to use regular expressions and process the entire input as a single string rather than by individual lines.

Regular expressions, though they are powerful and the appropriate tool to use for this job, come with a performance cost, which is another reason to use each expression only once for the entire text input.

Remember: All the JavaScript in this example runs each time the user types a character, so it is important to keep things as lightweight and efficient as possible.
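If per-keystroke work ever becomes a bottleneck, one common mitigation is to debounce the conversion so it only runs once the user pauses typing. This is an optional sketch, not something the original tool does:

```javascript
// Wrap a function so it runs only after `delay` milliseconds
// have passed with no new calls.
function debounce(fn, delay) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

// Hypothetical usage: wrap the convert() handler in a short delay,
// e.g. const debouncedConvert = debounce(convert, 150);
```

The tradeoff is a short lag before the preview updates, so a delay well under a second keeps the tool feeling live.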

I also want to make a note about the variable names in our convert_images_and_links_to_HTML() function. images (plural), image (singular), and link are not technically reserved words in JavaScript, but they clash with long-standing browser globals and built-ins, which is why they are absent from MDN’s list of reserved words yet appear on W3Schools’ list of names to avoid. Consequently, imgs, img, and a_tag were used for naming.

We’re using the String.prototype.match() function for each of the two regular expressions, then storing the results for each call in an array. From there, we use the nullish coalescing operator (??) on each call so that, if no matches are found, the result will be an empty array. If we do not do this and no matches are found, the result of each match() call will be null and will cause problems downstream.

const urls = string.match(/https?:\/\/[^\s<),]+[^\s<),.]/gmi) ?? [];
const imgs = string.match(/[^"'>\s]+\.(jpg|jpeg|gif|png|webp)/gmi) ?? [];

Next up, we filter the arrays of results so that each array contains only unique results. This is a critical step. If we don’t filter out duplicate results and the input text contains multiple instances of the same URL or image file name, then we break the HTML tags in the output. JavaScript does not provide a simple, built-in method to get unique items in an array that’s akin to the PHP array_unique() function.

The code snippet works around this limitation using an admittedly ugly but straightforward procedural approach. The same problem can be solved with a more functional approach if you prefer. There are many articles on the web describing various ways to filter a JavaScript array in order to keep only the unique items.
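One such functional approach leans on a Set, which by definition stores only unique values:

```javascript
// Round-trip an array through a Set to drop duplicate entries
// while preserving first-seen order.
function unique(items) {
  return [...new Set(items)];
}
```

With this helper, the two procedural de-duplication loops could each collapse to a single call, e.g. `const urls_unique = unique(urls);`.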

We’re also checking if the URL is matched as an image before replacing a URL with an appropriate <a> tag and performing the replacement only if the URL doesn’t match an image. We may be able to avoid having to perform this check by using a more intricate regular expression. The example code deliberately uses regular expressions that are perhaps less precise but hopefully easier to understand in an effort to keep things as simple as possible.

And, finally, we’re replacing image file names in the input text with <img> tags that have the src attribute set to the image file name. For example, my_image.png in the input is transformed into <img src='my_image.png'> in the output. We wrap each <img> tag with an <a> tag that links to the image file and opens it in a new tab when clicked.

There are a couple of benefits to this approach:

  • In a real-world scenario, you will likely use a CSS rule to constrain the size of the rendered image. By making the images clickable, you provide users with a convenient way to view the full-size image.
  • If the image is not a local file but is instead a URL to an image from a third party, this is a way to implicitly provide attribution. Ideally, you should not rely solely on this method but, instead, provide explicit attribution underneath the image in a <figcaption>, <cite>, or similar element. But if, for whatever reason, you are unable to provide explicit attribution, you are at least providing a link to the image source.

It may go without saying, but “hotlinking” images is something to avoid. Use only locally hosted images wherever possible, and provide attribution if you do not hold the copyright for them.

Before we move on to displaying the converted output, let’s talk a bit about accessibility, specifically the image alt attribute. The example code I provided does add an alt attribute in the conversion but does not populate it with a value, as there is no easy way to automatically calculate what that value should be. An empty alt attribute can be acceptable if the image is considered “decorative,” i.e., purely supplementary to the surrounding text. But one may argue that there is no such thing as a purely decorative image.

That said, I consider this to be a limitation of what we’re building.

Displaying The Output HTML

We’re at the point where we can finally work on displaying the HTML-encoded output! We’ve already handled all the work of converting the text, so all we really need to do now is call it:

function convert(input_string) {
  output.innerHTML = convert_images_and_links_to_HTML(convert_text_to_HTML(html_encode(input_string)));
}

If you would rather display the output string as raw HTML markup, use a <pre> tag as the output element instead of a <div>:

<pre id='output'></pre>

The only thing to note about this approach is that you would target the <pre> element’s textContent instead of innerHTML:

function convert(input_string) {
  output.textContent = convert_images_and_links_to_HTML(convert_text_to_HTML(html_encode(input_string)));
}

Conclusion

We did it! We built one of those copy-paste tools that converts plain text on the spot. In this case, we’ve configured it so that plain text entered into a <textarea> is parsed line-by-line and encoded into HTML that we format and display inside another element.

See the Pen Convert Plain Text to HTML (PoC) [forked] by Geoff Graham.

We were even able to keep the solution fairly simple, i.e., vanilla HTML, CSS, and JavaScript, without reaching for a third-party library or framework. Does this simple solution do everything a ready-made tool like a framework can do? Absolutely not. But a solution as simple as this is often all you need: nothing more and nothing less.

As far as scaling this further, the code could be modified to POST what’s entered into the <form> using a PHP script or the like. That would be a great exercise, and if you do it, please share your work with me in the comments because I’d love to check it out.


]]>
hello@smashingmagazine.com (Alexis Kypridemos)
<![CDATA[How To Monitor And Optimize Google Core Web Vitals]]> https://smashingmagazine.com/2024/04/monitor-optimize-google-core-web-vitals/ https://smashingmagazine.com/2024/04/monitor-optimize-google-core-web-vitals/ Tue, 16 Apr 2024 10:00:00 GMT This article is sponsored by DebugBear

Google’s Core Web Vitals initiative has increased the attention website owners need to pay to user experience. You can now more easily see when users have poor experiences on your website, and poor UX also has a bigger impact on SEO.

That means you need to test your website to identify optimizations. Beyond that, monitoring ensures that you can stay ahead of your Core Web Vitals scores for the long term.

Let’s find out how to work with different types of Core Web Vitals data and how monitoring can help you gain a deeper insight into user experiences and help you optimize them.

What Are Core Web Vitals?

There are three web vitals metrics Google uses to measure different aspects of website performance:

  • Largest Contentful Paint (LCP),
  • Cumulative Layout Shift (CLS),
  • Interaction to Next Paint (INP).

Largest Contentful Paint (LCP)

The Largest Contentful Paint metric is the closest thing to a traditional load time measurement. However, LCP doesn’t track a purely technical page load milestone like the JavaScript Load Event. Instead, it focuses on what the user can see by measuring how soon the largest content element on the page appears after the page is opened.

The faster the LCP happens, the better, and Google rates a passing LCP score below 2.5 seconds.

Cumulative Layout Shift (CLS)

Cumulative Layout Shift is a bit of an odd metric, as it doesn’t measure how fast something happens. Instead, it looks at how stable the page layout is once the page starts loading. Layout shifts mean that content moves around, disorienting the user and potentially causing accidental clicks on the wrong UI element.

The CLS score is calculated by looking at how far an element moved and how big the element is. Aim for a score below 0.1 to get a good rating from Google.
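More precisely, each individual layout shift score is the product of an impact fraction (how much of the viewport the unstable elements occupy) and a distance fraction (how far they moved relative to the viewport), and CLS adds these scores up. A simplified illustration that ignores the session windowing the real metric applies:

```javascript
// A single layout shift score = impact fraction * distance fraction.
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// Simplified CLS: the sum of individual layout shift scores.
// (The real metric sums scores within the worst "session window".)
function cumulativeLayoutShift(scores) {
  return scores.reduce((sum, s) => sum + s, 0);
}
```

For example, an element covering half the viewport that shifts by a quarter of the viewport height scores 0.5 × 0.25 = 0.125, already past the 0.1 threshold on its own.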

Interaction to Next Paint (INP)

Even websites that load quickly often frustrate users when interactions with the page feel sluggish. That’s why Interaction to Next Paint measures how long the page remains frozen after user interaction with no visual updates.

Page interactions should feel practically instant, so Google recommends an INP score below 200 milliseconds.
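For reference, the passing thresholds for all three metrics, along with the upper bounds Google publishes for the “needs improvement” band (4 seconds for LCP, 0.25 for CLS, and 500 milliseconds for INP), can be summed up in a small helper:

```javascript
// Google's published thresholds per metric: [good, needs-improvement].
// LCP and INP are in milliseconds; CLS is unitless.
const THRESHOLDS = {
  LCP: [2500, 4000],
  CLS: [0.1, 0.25],
  INP: [200, 500],
};

// Classify a Core Web Vitals value as Google's tools do.
function rateWebVital(metric, value) {
  const [good, poor] = THRESHOLDS[metric];
  if (value <= good) return "good";
  if (value <= poor) return "needs improvement";
  return "poor";
}
```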

What Are The Different Types Of Core Web Vitals Data?

You’ll often see different page speed metrics reported by different tools and data sources, so it’s important to understand the differences. We’ve published a whole article just about that, but here’s the high-level breakdown along with the pros and cons of each one:

  • Synthetic Tests
    These tests are run on-demand in a controlled lab environment in a fixed location with a fixed network and device speed. They can produce very detailed reports and recommendations.
  • Real User Monitoring (RUM)
    This data tells you how fast your website is for your actual visitors. That means you need to install an analytics script to collect it, and the reporting that’s available is less detailed than for lab tests.
  • CrUX Data
    Google collects this data from Chrome users as part of the Chrome User Experience Report (CrUX) and uses it as a ranking signal. It’s available for every website with enough traffic, but since it covers a 28-day rolling window, it takes a while for changes on your website to be reflected here. It also doesn’t include any debug data to help you optimize your metrics.

Start By Running A One-Off Page Speed Test

Before signing up for a monitoring service, it’s best to run a one-off lab test with a free tool like Google’s PageSpeed Insights or the DebugBear Website Speed Test. Both of these tools report with Google CrUX data that reflects whether real users are facing issues on your website.

Note: The lab data you get from some Lighthouse-based tools — like PageSpeed Insights — can be unreliable.

INP is best measured for real users, where you can see the elements that users interact with most often and where the problems lie. But a free tool like the INP Debugger can be a good starting point if you don’t have RUM set up yet.

How To Monitor Core Web Vitals Continuously With Scheduled Lab-Based Testing

Running tests continuously has a few advantages over ad-hoc tests. Most importantly, continuous testing triggers alerts whenever a new issue appears on your website, allowing you to start fixing them right away. You’ll also have access to historical data, allowing you to see exactly when a regression occurred and letting you compare test results before and after to see what changed.

Scheduled lab tests are easy to set up using a website monitoring tool like DebugBear. Enter a list of website URLs and pick a device type, test location, and test frequency to get things running:

As this process runs, it feeds data into the detailed dashboard with historical Core Web Vitals data. You can monitor a number of pages on your website or track the speed of your competition to make sure you stay ahead.

When a regression occurs, you can dive deep into the results using DebugBear’s Compare mode. This mode shows before-and-after test results side by side, giving you the context to identify causes: you see exactly what changed. For example, in the following case, we can see that HTTP compression stopped working for a file, leading to an increase in page weight and longer download times.

How To Monitor Real User Core Web Vitals

Synthetic tests are great for super-detailed reporting of your page load time. However, other aspects of user experience, like layout shifts and slow interactions, heavily depend on how real users use your website. So, it’s worth setting up real user monitoring with a tool like DebugBear.

To monitor real user web vitals, you’ll need to install an analytics snippet that collects this data on your website. Once that’s done, you’ll be able to see data for all three Core Web Vitals metrics across your entire website.

To optimize your scores, you can go into the dashboard for each individual metric, select a specific page you’re interested in, and then dive deeper into the data.

For example, you can see whether a slow LCP score is caused by a slow server response, render blocking resources, or by the LCP content element itself.

You’ll also find that the LCP element varies between visitors. Lab test results are always the same, as they rely on a single fixed screen size. However, in the real world, visitors use a wide range of devices and will see different content when they open your website.

INP is tricky to debug without real user data. Yet an analytics tool like DebugBear can tell you exactly what page elements users are interacting with most often and which of these interactions are slow to respond.

Thanks to the new Long Animation Frames API, we can also see specific scripts that contribute to slow interactions. We can then decide to optimize these scripts, remove them from the page, or run them in a way that does not block interactions for as long.
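The Long Animation Frames API exposes, for each slow rendering frame, the scripts that ran during it. Here's a minimal sketch of how you might aggregate that attribution yourself in a Chromium-based browser that supports the API (entry field names follow the LoAF spec; the helper name is illustrative, and this is not DebugBear's implementation):

```javascript
// Aggregate script attribution from long-animation-frame entries to find
// which scripts contribute most total blocking time.
function slowestScripts(loafEntries, topN = 3) {
  const byScript = new Map();
  for (const entry of loafEntries) {
    for (const script of entry.scripts ?? []) {
      const key = script.sourceURL || script.invoker || "(unknown)";
      byScript.set(key, (byScript.get(key) ?? 0) + script.duration);
    }
  }
  return [...byScript.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN);
}

// In a supporting browser, feed it real entries as they are observed:
if (typeof PerformanceObserver !== "undefined" &&
    PerformanceObserver.supportedEntryTypes?.includes("long-animation-frame")) {
  new PerformanceObserver((list) => {
    console.table(slowestScripts(list.getEntries()));
  }).observe({ type: "long-animation-frame", buffered: true });
}
```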

Conclusion

Continuously monitoring Core Web Vitals lets you see how website changes impact user experience and ensures you get alerted when something goes wrong. While it’s possible to measure Core Web Vitals using a wide range of tools, those tools are limited by the type of data they use to evaluate performance, not to mention they only provide a single snapshot of performance at a specific point in time.

A tool like DebugBear gives you access to several different types of data that you can use to troubleshoot performance and optimize your website, complete with RUM capabilities that offer a historical record of performance for identifying issues where and when they occur. Sign up for a free DebugBear trial here.

]]>
hello@smashingmagazine.com (Matt Zeunert)
<![CDATA[Sliding 3D Image Frames In CSS]]> https://smashingmagazine.com/2024/04/sliding-3d-image-frames-css/ https://smashingmagazine.com/2024/04/sliding-3d-image-frames-css/ Fri, 12 Apr 2024 18:00:00 GMT In a previous article, we played with CSS masks to create cool hover effects where the main challenge was to rely only on the <img> tag as our markup. In this article, we pick up where we left off by “revealing” the image from behind a sliding door sort of thing — like opening up a box and finding a photograph in it.

This is because the padding has a transition that goes from s - 2*b to 0. Meanwhile, the background transitions from 100% (equivalent to --s) to 0. There’s a difference equal to 2*b. The background covers the entire area, while the padding covers less of it. We need to account for this.

Ideally, the padding transition would take less time to complete and have a small delay at the beginning to sync things up, but finding the correct timing won’t be an easy task. Instead, let’s increase the padding transition’s range to make it equal to the background.

img {
  --h: calc(var(--s) - var(--b));
  padding-top: min(var(--h), var(--s) - 2*var(--b));
  transition: --h 1s linear;
}
img:hover {
  --h: calc(-1 * var(--b));
}

The new variable, --h, transitions from s - b to -b on hover, so we have the needed range since the difference is equal to --s, making it equal to the background and clip-path transitions.

The trick is the min() function. While --h transitions from s - b down to s - 2*b, the padding stays equal to s - 2*b, so nothing moves during that brief phase. Between s - 2*b and 0, --h is the smaller value, so the padding follows it and animates. Then, when --h transitions from 0 to -b, the padding remains equal to 0 since, by default, it cannot be a negative value.

It would be more intuitive to use clamp() instead:

padding-top: clamp(0px, var(--h), var(--s) - 2*var(--b));

That said, we don’t need to specify the lower parameter since padding cannot be negative and will, by default, be clamped to 0 if you give it a negative value.

We are getting much closer to the final result!

First, we increase the border’s thickness on the left and bottom sides of the image:

img {
  --b: 10px; /* the image border */
  --d: 30px; /* the depth */

  border: solid #0000;
  border-width: var(--b) var(--b) calc(var(--b) + var(--d)) calc(var(--b) + var(--d));
}

Second, we add a conic-gradient() on the background to create darker colors around the box:

background: 
  conic-gradient(at left var(--d) bottom var(--d),
   #0000 25%,#0008 0 62.5%,#0004 0) 
  var(--c);

Notice the semi-transparent black color values (e.g., #0008 and #0004). The slight bit of transparency blends with the colors behind it to create the illusion of a dark variation of the main color since the gradient is placed above the background color.

And lastly, we apply a clip-path to cut out the corners that establish the 3D box.

clip-path: polygon(var(--d) 0, 100% 0, 100% calc(100% - var(--d)), calc(100% - var(--d)) 100%, 0 100%, 0 var(--d));

See the Pen The image within a 3D box by Temani Afif.

Now that we see and understand how the 3D effect is built, let’s put back the things we removed earlier, starting with the padding:

See the Pen Putting back the padding animation by Temani Afif.

It works fine. But note how we’ve introduced the depth (--d) to the formula. That’s because the bottom border is no longer equal to b but b + d.

--h: calc(var(--s) - var(--b) - var(--d));
padding-top: min(var(--h),var(--s) - 2*var(--b) - var(--d));

Let’s do the same thing with the linear gradient. We need to decrease its size so it covers the same area as it did before we introduced the depth so that it doesn’t overlap with the conic gradient:

See the Pen Putting back the gradient animation by Temani Afif.

We are getting closer! The last piece we need to add back in from earlier is the clip-path transition that is combined with the box-shadow. We cannot reuse the same code we used before since we changed the clip-path value to create the 3D box shape. But we can still transition it to get the sliding result we want.

The idea is to have two points at the top that move up and down to reveal and hide the box-shadow while the other points remain fixed. Here is a small video to illustrate the movement of the points.

See that? We have five fixed points. The two at the top move to increase the area of the polygon and reveal the box shadow.

img {
  clip-path: polygon(
    var(--d) 0, /* --> var(--d) calc(-1*(var(--s) - var(--d))) */
    100%     0, /* --> 100%     calc(-1*(var(--s) - var(--d))) */

    /* the fixed points */ 
    100% calc(100% - var(--d)), /* 1 */
    calc(100% - var(--d)) 100%, /* 2 */
    0 100%,                     /* 3 */
    0 var(--d),                 /* 4 */
    var(--d) 0);                /* 5 */
}

And we’re done! We’re left with a nice 3D frame around the image element with a cover that slides up and down on hover. And we did it with zero extra markup or reaching for pseudo-elements!

See the Pen 3D image with reveal effect by Temani Afif.

And here is the first demo I shared at the start of this article, showing the two sliding variations.

See the Pen Image gift box (hover to reveal) by Temani Afif.

This last demo is an optimized version of what we did together. I have written most of the formulas using the variable --h so that I only update one value on hover. It also includes another variation. Can you reverse-engineer it and see how its code differs from the one we did together?

One More 3D Example

Want another fancy effect that uses 3D effects and sliding overlays? Here’s one I put together using a different 3D perspective where the overlay splits open rather than sliding from one side to the other.

See the Pen Image gift box II (hover to reveal) by Temani Afif.

Your homework is to dissect the code. It may look complex, but if you trace the steps we completed for the original demo, I think you’ll find that it’s not a terribly different approach. The sliding effect still combines the padding, the object-* properties, and clip-path but with different values to produce this new effect.

Conclusion

I hope you enjoyed this little 3D image experiment and the fancy effect we applied to it. I know that adding an extra element (i.e., a parent <div> as a wrapper) to the markup would have made the effect a lot easier to achieve, as would pseudo-elements and translations. But we are here for the challenge and learning opportunity, right?

Limiting the HTML to only a single element allows us to push the limits of CSS to discover new techniques that can save us time and bytes, especially in those situations where you might not have direct access to modify HTML, like when you’re working in a CMS template. Don’t look at this as an over-complicated exercise. It’s an exercise that challenges us to leverage the power and flexibility of CSS.

]]>
hello@smashingmagazine.com (Temani Afif)
<![CDATA[Penpot’s CSS Grid Layout: Designing With Superpowers]]> https://smashingmagazine.com/2024/04/penpot-css-grid-layout-designing-superpowers/ https://smashingmagazine.com/2024/04/penpot-css-grid-layout-designing-superpowers/ Thu, 11 Apr 2024 08:00:00 GMT This article is sponsored by Penpot

It was less than a year ago when I first had a chance to use Penpot and instantly got excited about it. They managed to build something that designers haven’t seen before — a modern, open-source tool for everyone. In the world of technology, that might not sound groundbreaking. After all, open-source tools and software are taken for granted as a cornerstone of modern web development. But for some reason, not for design — until now. Penpot’s approach to building design software comes with a lot of good arguments. And it has gathered a strong community.

One of the reasons why Penpot is so exciting is that it allows creators to build user interfaces in a visual environment, but using the same standards and technologies as the end product. It makes a design workflow easier on many levels. Today, we are going to focus on just one of them, building layouts.

Design tools went a long way trying to make it easier to design complex, responsive layouts and flexible, customizable components. Some of them tried to mimic the mechanisms used in web technologies and others tried to mimic these imitations. But such an approach will take you only so far.

Short History Of Web Layouts

So how are the layouts for the web built in practice?

If you’ve been around the industry long enough, you might remember the times when you used frames, tables, and floats to build layouts. And if you haven’t, you didn’t miss much. Just to give you a taste of how bad it was: exporting tiny images of rounded corners from ever-crashing Photoshop, just to meticulously position them in every corner of a rectangle so you could make a dull, rounded button, was simply a pain. Far too often, it was a pleasure to craft yet another amazing design — but so many tears and so much sorrow to actually implement it.

Then Flexbox came in and changed everything. And soon after it, Grid. Two powerful yet amazingly simple engines to build layouts that changed web developers’ lives forever.

Ironically, design tools never caught up. Flexbox and Grid opened an ocean of possibilities, yet gated behind a barrier of knowing how to code. None of the design tools ever implemented them so a larger audience of designers could leverage them in their workflows. Not until now.

Creating Layouts With Penpot

Penpot is becoming the first design tool to support both Flexbox and Grid in their toolkit. And by support, I don’t mean a layout feature that tries to copy what Flexbox or Grid has to offer. We’re talking about an actual implementation of Flexbox and Grid inside the design tool.

Penpot’s Flexbox implementation went public earlier this year. If you’d like to give it a try, last year, I wrote a separate article just about it. Now, Penpot is fully implementing both Flexbox and Grid.

You might be wondering why we need both. Couldn’t you just use Flexbox for everything, same as you use the same simple layout features in a design tool? Technically, yes, you could. In fact, most people do. (At the time of writing, only a quarter of websites worldwide use CSS Grid, but its adoption is steadily increasing; source)

So if you want to build simple, mostly linear layouts, Flexbox is probably all you’ll ever need. But if you want to gain some design superpowers? Learn Grid.

Penpot’s CSS Grid Layout is out now, so you can already give it a try. It’s a part of their major 2.0 release, bringing a bunch of long-awaited features and improvements. I’d strongly encourage you to give it a go and see how it works for yourself. And if you need some inspiration, keep reading!

CSS Grid Layout In Practice

As an example, let’s build a portfolio page that consists of a sidebar and a grid of pictures.

Creating A Layout

Our first step will be to create a simple two-dimensional grid. In this case using a Grid Layout makes more sense than Flex Layout as we want to have more granular control over how elements are laid out on multiple axes.

To make the grid even more powerful, you can merge cells, group them into functional areas, and name them. Here, we are going to create a dedicated area for the sidebar.

As you adjust the layout later, you can see that the sidebar always keeps the same width and full height of the design while other cells get adjusted to the available space.

Building Even More Complex Grids

That’s not all. Apart from merging cells of the grid, you can tell elements inside it to take multiple cells. On our portfolio page, we are going to use this to make the featured picture bigger than others and take four cells instead of one.

As a result, we created a complex, responsive layout that would be a breeze to turn into a functional website but at the same time would be completely impossible to build in any other design tool out there. And that’s just a fraction of what Grid Layout can do.
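For reference, a layout like this maps to only a few lines of CSS. Here is a hypothetical hand-written equivalent (the class names and track sizes are illustrative; Penpot generates its own code):

```css
.portfolio {
  display: grid;
  /* A fixed-width sidebar column plus three fluid columns */
  grid-template-columns: 250px repeat(3, 1fr);
  grid-template-rows: repeat(3, 1fr);
  gap: 1rem;
}

.sidebar {
  grid-column: 1;      /* keeps its width as the layout adjusts */
  grid-row: 1 / -1;    /* spans the full height of the grid */
}

.featured {
  grid-column: span 2; /* the featured picture takes four cells... */
  grid-row: span 2;    /* ...instead of one */
}
```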

Next Steps

I hope you liked this demo of Penpot's Grid Layout. If you’d like to play around with the examples used in this article, go ahead and duplicate this Penpot file. It’s a great template that explains all the ins and outs of using Grid in your designs!

In case you’re more of a video-learning type, there’s a great tutorial on Grid Layout you can watch now on YouTube. And if you need help at any point, the Penpot community will be more than happy to answer your questions.

Summary

Flexbox and Grid in Penpot open up opportunities to craft layouts like never before. Today, anyone can combine the power of Flex Layout and Grid Layout to create complex, sophisticated structures that are flexible, responsive, and ready to deploy out-of-the-box—all without writing a single line of code.

Working with the right technologies not only makes things easier, but it also just feels right. That's something I've always longed for in design tools. Adopting CSS as a standard for both designers and developers facilitates smoother collaboration and helps them both feel more at home in their workflows.

For designers, that’s also a chance to strengthen their skill set, which matters today more than ever. The design industry is a competitive space that keeps changing rapidly, and staying competitive is hard work. However, learning the less obvious aspects and gaining a better understanding of the technologies you work with might help you do that.

Try CSS Grid Layout And Share Your Thoughts!

If you decide to give CSS Grid Layout a try, don’t hesitate to share your experience! The team behind Penpot would love to hear your feedback. Being a completely free and open-source tool, Penpot’s development thrives thanks to its community and people like you.

]]>
hello@smashingmagazine.com (Mikołaj Dobrucki)
<![CDATA[Connecting With Users: Applying Principles Of Communication To UX Research]]> https://smashingmagazine.com/2024/04/applying-principles-communication-ux-research/ https://smashingmagazine.com/2024/04/applying-principles-communication-ux-research/ Tue, 09 Apr 2024 10:00:00 GMT Communication is in everything we do. We communicate with users through our research, our design, and, ultimately, the products and services we offer. UX practitioners and those working on digital product teams benefit from understanding principles of communication and their application to our craft. Treating our UX processes as a mode of communication between users and the digital environment can help unveil in-depth, actionable insights.

In this article, I’ll focus on UX research. Communication is a core component of UX research, as it serves to bridge the gap between research insights, design strategy, and business outcomes. UX researchers, designers, and those working with UX researchers can apply key aspects of communication theory to help gather valuable insights, enhance user experiences, and create more successful products.

Fundamentals of Communication Theory

Communications as an academic field encompasses various models and principles that highlight the dynamics of communication between individuals and groups. Communication theory examines the transfer of information from one person or group to another. It explores how messages are transmitted, encoded, and decoded, acknowledges the potential for interference (or ‘noise’), and accounts for feedback mechanisms in enhancing the communication process.

In this article, I will focus on the Transactional Model of Communication. There are many other models and theories in the academic literature on communication. I have included references at the end of the article for those interested in learning more.

The Transactional Model of Communication (Figure 1) is a two-way process that emphasizes the simultaneous sending and receiving of messages and feedback. Importantly, it recognizes that communication is shaped by context and is an ongoing, evolving process. I’ll use this model and understanding when applying principles from the model to UX research. You’ll find that much of what is covered in the Transactional Model would also fall under general best practices for UX research, suggesting even if we aren’t communications experts, much of what we should be doing is supported by research in this field.

Understanding the Transactional Model

Let’s take a deeper dive into the six key factors and their applications within the realm of UX research:

  1. Sender: In UX research, the sender is typically the researcher who conducts interviews, facilitates usability tests, or designs surveys. For example, if you’re administering a user interview, you are the sender who initiates the communication process by asking questions.
  2. Receiver: The receiver is the individual who decodes and interprets the messages sent by the sender. In our context, this could be the user you interview or the person taking a survey you have created. They receive and process your questions, providing responses based on their understanding and experiences.
  3. Message: This is the content being communicated from the sender to the receiver. In UX research, the message can take various forms, like a set of survey questions, interview prompts, or tasks in a usability test.
  4. Channel: This is the medium through which the communication flows. For instance, face-to-face interviews, phone interviews, email surveys administered online, and usability tests conducted via screen sharing are all different communication channels. You might use multiple channels simultaneously, for example, communicating over voice while also using a screen share to show design concepts.
  5. Noise: Any factor that may interfere with the communication is regarded as ‘noise.’ In UX research, this could be complex jargon that confuses respondents in a survey, technical issues during a remote usability test, or environmental distractions during an in-person interview.
  6. Feedback: The response the receiver provides back to the sender. For example, the answers given by a user during an interview, the data collected from a completed survey, or the physical reactions of a usability testing participant while completing a task.

Applying the Transactional Model of Communication to Preparing for UX Research

We can become complacent or feel rushed when creating our research protocols. I think this is natural given the pace of many workplaces and our need to deliver results quickly. You can apply the lens of the Transactional Model of Communication to your research preparation without adding much time. Applying the Transactional Model of Communication to your preparation should:

  • Improve Clarity
    The model provides a clear representation of communication, empowering the researcher to plan and conduct studies more effectively.
  • Minimize Misunderstanding
    By highlighting potential noise sources, user confusion or misunderstandings can be better anticipated and mitigated.
  • Enhance Participant Engagement
    With an attentive eye on feedback, participants are likely to feel valued, thus increasing active involvement and quality of input.

You can address the specific elements of the Transactional Model through the following steps while preparing for research:

Defining the Sender and Receiver

In UX research, the sender can often be the UX researcher conducting the study, while the receiver is usually the research participant. Understanding this dynamic can help researchers craft questions or tasks more empathetically and efficiently. You should try to collect some information on your participant in advance to prepare yourself for building a rapport.

For example, if you are conducting contextual inquiry with the field technicians of an HVAC company, you’ll want to dress appropriately to reflect your understanding of the context in which your participants (receivers) will be conducting their work. Showing up dressed in formal attire might be off-putting and create a negative dynamic between sender and receiver.

Message Creation

The message in UX research typically is the questions asked or tasks assigned during the study. Careful consideration of tenor, terminology, and clarity can aid data accuracy and participant engagement. Whether you are interviewing or creating a survey, you need to double-check that your audience will understand your questions and provide meaningful answers. You can pilot-test your protocol or questionnaire with a few representative individuals to identify areas that might cause confusion.

Using the HVAC example again, you might find that field technicians use certain terminology differently than you expect. Asking them what “tools” they use to complete their tasks might yield answers about physical tools, like a pipe and wrench, rather than the digital tools you’d find on a computer or smartphone.

Choosing the Right Channel

The channel selection depends on the method of research. For instance, face-to-face methods might use physical verbal communication, while remote methods might rely on emails, video calls, or instant messaging. The choice of the medium should consider factors like tech accessibility, ease of communication, reliability, and participant familiarity with the channel. For example, you introduce an additional challenge (noise) if you ask someone who has never used an iPhone to test an app on an iPhone.

Minimizing Noise

Noise in UX research comes in many forms, from unclear questions inducing participant confusion to technical issues in remote interviews that cause interruptions. The key is to foresee potential issues and have preemptive solutions ready.

Facilitating Feedback

You should be prepared for how you might collect and act on participant feedback during the research. Encouraging regular feedback from the user during UX research ensures their understanding and that they feel heard. This could range from asking them to ‘think aloud’ as they perform tasks to encouraging them to email queries or concerns after the session. You should document any noise that might impact your findings and account for it in your analysis and reporting.

Track Your Alignment to the Framework

You can track what you do to align your processes with the Transactional Model prior to and during research using a spreadsheet. I’ll provide an example of a spreadsheet I’ve used in the later case study section of this article. You should create your spreadsheet during the process of preparing for research, as some of what you do to prepare should align with the factors of the model.

You can use these tips for preparation regardless of the specific research method you are undertaking. Let’s now look closer at a few common methods and get specific on how you can align your actions with the Transactional Model.

Applying the Transactional Model to Common UX Research Methods

UX research relies on interaction with users. We can easily incorporate aspects of the Transactional Model of Communication into our most common methods. Utilizing the Transactional Model in conducting interviews, surveys, and usability testing can help provide structure to your process and increase the quality of insights gathered.

Interviews

Interviews are a common method used in qualitative UX research. They provide the perfect method for applying principles from the Transactional Model. In line with the Transactional Model, the researcher (sender) sends questions (messages) in-person or over the phone/computer medium (channel) to the participant (receiver), who provides answers (feedback) while contending with potential distraction or misunderstanding (noise). Reflecting on communication as transactional can help remind us we need to respect the dynamic between ourselves and the person we are interviewing. Rather than approaching an interview as a unidirectional interrogation, researchers need to view it as a conversation.

Applying the Transactional Model to conducting interviews means we should account for a number of factors that allow for high-quality communication. Note how the following overlap with what we typically call best practices.

Asking Open-ended Questions

To truly harness a two-way flow of communication, open-ended questions, rather than closed-ended ones, are crucial. For instance, rather than asking, “Do you use our mobile application?” ask, “Can you describe your use of our mobile app?” This encourages the participant to share more expansive and descriptive insights, furthering the dialogue.

Actively Listening

As the success of an interview relies on the participant’s responses, active listening is a crucial skill for UX researchers. The researcher should encourage participants to express their thoughts and feelings freely. Reflective listening techniques, such as paraphrasing or summarizing what the participant has shared, can reinforce to the interviewee that their contributions are being acknowledged and valued. It also provides an opportunity to clarify potential noise or misunderstandings that may arise.

Being Responsive

Building on the simultaneous send-receive nature of the Transactional Model, researchers must remain responsive during interviews. Providing non-verbal cues (like nodding) and verbal affirmations (“I see,” “Interesting”) lets participants know their message is being received and understood, making them feel comfortable and more willing to share.

Minimizing Noise

We should always attempt to account for noise in advance, as well as during our interview sessions. Noise, in the form of misinterpretations or distractions, can disrupt effective communication. Researchers can proactively reduce noise by conducting a dry run in advance of the scheduled interviews. This helps you become more fluent at going through the interview and also helps identify areas that might need improvement or be misunderstood by participants. You also reduce noise by creating a conducive interview environment, minimizing potential distractions, and asking clarifying questions during the interview whenever necessary.

For example, if a participant uses a term the researcher doesn’t understand, the researcher should politely ask for clarification rather than guessing its meaning and potentially misinterpreting the data.

Additional forms of noise can include participant confusion or distraction. You should let participants know to ask if they are unclear on anything you say or do. It’s a good idea to always ask participants to put their smartphones on mute. You should only provide information critical to the process when introducing the interview or tasks. For example, you don’t need to give a full background of the history of the product you are researching if that isn’t required for the participant to complete the interview. However, you should let them know the purpose of the research, gain their consent to participate, and inform them of how long you expect the session to last.

Strategizing the Flow

Researchers should build strategic thinking into their interviews to support the Transaction Model. Starting the interview with less intrusive questions can help establish rapport and make the participant more comfortable, while more challenging or sensitive questions can be left for later when the interviewee feels more at ease.

A well-planned interview encourages a fluid dialogue and exchange of ideas. This is another area where conducting a dry run can help to ensure high-quality research. You and your dry-run participants should recognize areas where questions aren’t flowing in the best order or don’t make sense in the context of the interview, allowing you to correct the flow in advance.

While much of what the Transactional Model informs for interviews already aligns with common best practices, the model suggests we should give deeper consideration to factors that are easy to overlook once we become overly comfortable with interviewing: context considerations, power dynamics, and post-interview actions.

Context Considerations

You need to account for both the context of the participant, e.g., their background, demographic, and psychographic information, as well as the context of the interview itself. You should make subtle yet meaningful modifications depending on the channel through which you are conducting the interview.

For example, you should utilize video and be aware of your facial and physical responses if you are conducting an interview using an online platform, whereas if it’s a phone interview, you will need to rely on verbal affirmations that you are listening and following along, while also being mindful not to interrupt the participant while they are speaking.

Power Dynamics

Researchers need to be aware of how their role, background, and identity might influence the power dynamics of the interview. You can attempt to address power dynamics by sharing research goals transparently and addressing any potential concerns about bias a participant shares.

We are responsible for creating a safe and inclusive space for our interviews. You do this through the use of inclusive language, listening actively without judgment, and being flexible to accommodate different ways of knowing and expressing experiences. You should also empower participants as collaborators whenever possible. You can offer opportunities for participants to share feedback on the interview process and analysis. Doing this validates participants’ experiences and knowledge and ensures their voices are heard and valued.

Post-Interview Actions

You have a number of options for actions that can close the loop of your interviews with participants in line with the “feedback” the model suggests is a critical part of communication. Some tactics you can consider following your interview include:

  • Debriefing
    Dedicate a few minutes at the end to discuss the participant’s overall experience, impressions, and suggestions for future interviews.
  • Short surveys
    Send a brief survey via email or an online platform to gather feedback on the interview experience.
  • Follow-up calls
    Consider follow-up calls with specific participants to delve deeper into their feedback and gain additional insight if you find that is warranted.
  • Thank you emails
    Include a “feedback” section in your thank you email, encouraging participants to share their thoughts on the interview.

You also need to do something with the feedback you receive. Researchers and product teams should make time for reflexivity and critical self-awareness.

As practitioners in a human-focused field, we are expected to continuously examine how our assumptions and biases might influence our interviews and findings.

We shouldn’t practice our craft in a silo. Instead, seeking feedback from colleagues and mentors to maintain ethical research practices should be a standard practice for interviews and all UX research methods.

By considering interviews as an ongoing transaction and exchange of ideas rather than a unidirectional Q&A, UX researchers can create a more communicative and engaging environment. You can see how models of communication have informed best practices for interviews. With a better knowledge of the Transactional Model, you can go deeper and check your work against the framework of the model.

Surveys

The Transactional Model of Communication reminds us to acknowledge the feedback loop even in seemingly one-way communication methods like surveys. Instead of merely sending out questions and collecting responses, we need to provide space for respondents to voice their thoughts and opinions freely. When we make participants feel heard, engagement with our surveys should increase, dropouts should decrease, and response quality should improve.

Like other methods, surveys involve the core components of the model:

  • Sender
    The researcher(s) who create the instructions and questionnaire.
  • Message
    The survey itself, including any instructions, disclaimers, and consent forms.
  • Channel
    How the survey is administered, e.g., online, in person, or pen and paper.
  • Receiver
    The participant.
  • Noise
    Potential misunderstandings or distractions.
  • Feedback
    The responses.

Designing the Survey

Understanding the Transactional Model will help researchers design more effective surveys. Researchers are encouraged to be aware of both their role as the sender and to anticipate the participant’s perspective as the receiver. Begin surveys with clear instructions, explaining why you’re conducting the survey and how long it’s estimated to take. This establishes a more communicative relationship with respondents right from the start. Test these instructions with multiple people prior to launching the survey.

Crafting Questions

The questions should be crafted to encourage feedback and not just a simple yes or no. You should consider asking scaled questions or items that have been statistically validated to measure certain attributes of users.

For example, if you were looking deeper at a mobile banking application, rather than asking, “Did you find our product easy to use?” you would want to break that out into multiple aspects of the experience and ask about each with a separate question such as “On a scale of 1–7, with 1 being extremely difficult and 7 being extremely easy, how would you rate your experience transferring money from one account to another?”.
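Once collected, scaled responses like these can be summarized numerically during analysis. A minimal sketch; the ratings below are hypothetical, not from any real study:

```javascript
// Hypothetical 1–7 ratings for: "How would you rate your experience
// transferring money from one account to another?"
const ratings = [6, 7, 5, 6, 4, 7, 6];

// The mean gives a single summary score for the attribute being measured.
const mean = ratings.reduce((sum, r) => sum + r, 0) / ratings.length;

// A simple distribution shows how responses spread across the scale,
// which a lone average can hide.
const distribution = {};
for (const r of ratings) {
  distribution[r] = (distribution[r] || 0) + 1;
}
```

Reporting both the mean and the distribution guards against a middling average masking a polarized set of responses.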

Minimizing Noise

Reducing ‘noise,’ or misunderstandings, is crucial for increasing the reliability of responses. Your first line of defense in reducing noise is to make sure you are sampling from the appropriate population you want to conduct the research with. You need to use a screener that will filter out non-viable participants prior to including them in the survey. You do this when you correctly identify the characteristics of the population you want to sample from and then exclude those falling outside of those parameters.

Additionally, you should focus on prioritizing finding participants through random sampling from the population of potential participants versus using a convenience sample, as this helps to ensure you are collecting reliable data.
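The screening and sampling steps above can be sketched in code. This is only an illustration; the participant pool and eligibility criteria are hypothetical:

```javascript
// Hypothetical participant pool; the screening criteria below are
// illustrative, not from any real study.
const pool = [
  { id: "p1", usesMobileBanking: true,  age: 34 },
  { id: "p2", usesMobileBanking: false, age: 51 },
  { id: "p3", usesMobileBanking: true,  age: 27 },
  { id: "p4", usesMobileBanking: true,  age: 45 },
];

// Screener: keep only participants matching the target population.
const eligible = pool.filter((p) => p.usesMobileBanking && p.age >= 18);

// Random sampling (rather than inviting whoever is most convenient):
// shuffle the eligible pool, then take the first n.
function randomSample(items, n) {
  const copy = [...items];
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1)); // Fisher-Yates shuffle
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy.slice(0, n);
}

const invited = randomSample(eligible, 2);
```

Screening first and then sampling randomly from the eligible set keeps the convenience of an existing pool without the bias of a pure convenience sample.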

When looking at the survey itself, there are a number of recommendations to reduce noise. You should ensure questions are easily understandable, avoid technical jargon, and sequence questions logically. A question bank should be reviewed and tested before being finalized for distribution.

For example, a statement like “Do you use and like this feature?” can confuse respondents because it is actually two separate questions, a pattern known as a double-barreled question: do you use the feature, and do you like the feature? You should split items like this into more than one question.

You should use visual aids that are relevant whenever possible to enhance the clarity of the questions. For example, if you are asking questions about an application’s “Dashboard” screen, you might want to provide a screenshot of that page so survey takers have a clear understanding of what you are referencing. You should also avoid the use of jargon if you are surveying a non-technical population and explain any terminology that might be unclear to participants taking the survey.

The Transactional Model suggests active participation in communication is necessary for effective communication. Participants can become distracted or take a survey without intending to provide thoughtful answers. You should consider adding a question somewhere in the middle of the survey to check that participants are paying attention and responding appropriately, particularly for longer surveys.

This is often done using a simple math problem such as “What is the answer to 1+1?” Anyone not responding with the answer of “2” might not be adequately paying attention to the responses they are providing and you’d want to look closer at their responses, eliminating them from your analysis if deemed appropriate.
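At analysis time, responses that fail the attention check can be flagged for that closer look. A minimal sketch with hypothetical data and field names:

```javascript
// Hypothetical survey responses; "attentionCheck" holds the answer to
// the embedded "What is the answer to 1+1?" item.
const responses = [
  { id: "r1", attentionCheck: 2, easeOfTransfer: 6 },
  { id: "r2", attentionCheck: 5, easeOfTransfer: 7 }, // failed the check
  { id: "r3", attentionCheck: 2, easeOfTransfer: 4 },
];

// Flag rather than silently delete, so the researcher can review each
// failure before deciding whether to exclude it from analysis.
const flagged = responses.filter((r) => r.attentionCheck !== 2);
const retained = responses.filter((r) => r.attentionCheck === 2);
```

Keeping the flagged rows around, instead of dropping them immediately, preserves the option to inspect whether the failure reflects inattention or a simple slip.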

Encouraging Feedback

While descriptive feedback questions are one way of promoting dialogue, you can also include areas where respondents can express any additional thoughts or questions they have outside of the set question list. This is especially useful in online surveys, where researchers can’t immediately address participants’ questions or clarify doubts.

Be mindful that too many open-ended questions can cause fatigue, so limit their number; I recommend two to three open-ended questions, depending on the length of your overall survey.

Post-Survey Actions

After collecting and analyzing the data, you can send follow-up communications to the respondents. Let them know the changes made based on their feedback, thank them for their participation, or even share a summary of the survey results. This fulfills the Transactional Model’s feedback loop and communicates to the respondent that their input was received, valued, and acted upon.

You can also meet this suggestion by providing an email address for participants to follow up if they desire more information post-survey. You are allowing them to complete the loop themselves if they desire.

Applying the Transactional Model to surveys can breathe new life into the way surveys are conducted in UX research. It encourages active participation from respondents, making the process more interactive and engaging while enhancing the quality of the data collected. You can experiment with applying some or all of the steps listed above. You will likely find you are already doing much of what’s mentioned; however, being explicit can help you make sure you are thoughtfully applying these principles from the field of communication.

Usability Testing

Usability testing is another clear example of a research method highlighting components of the Transactional Model. In the context of usability testing, the Transactional Model of Communication’s application opens a pathway for a richer understanding of the user experience by positioning both the user and the researcher as sender and receiver of communication simultaneously.

Here are some ways a researcher can use elements of the Transactional Model during usability testing:

Task Assignment as Message Sending

When a researcher assigns tasks to a user during usability testing, they act as the sender in the communication process. To ensure the user accurately receives the message, these tasks need to be clear and well-articulated. For example, a task like “Register a new account on the app” sends a clear message to the user about what they need to do.

You don’t need to tell them how to do the task, as usually, that’s what we are trying to determine from our testing, but if you are not clear on what you want them to do, your message will not resonate in the way it is intended. This is another area where a dry run in advance of the testing is an optimal solution for making sure tasks are worded clearly.

Observing and Listening as Message Receiving

As the participant interacts with the application, concept, or design, the researcher, as the receiver, picks up on verbal and nonverbal cues. For instance, if a user is clicking around aimlessly or murmuring in confusion, the researcher can take these as feedback about certain elements of the design that are unclear or hard to use. You can also ask the user to explain the cues you observe, which in turn gives them feedback on how their communication is being received.

Real-time Interaction

The transactional nature of the model recognizes the importance of real-time interaction. For example, if during testing, the user is unsure of what a task means or how to proceed, the researcher can provide clarification without offering solutions or influencing the user’s action. This interaction follows the communication flow prescribed by the transactional model. We lose the ability to do this during unmoderated testing; however, many design elements are forms of communication that can serve to direct users or clarify the purpose of an experience (to be covered more in article two).

Noise

In usability testing, noise could mean unclear tasks, users’ preconceived notions, or even issues like slow software response. Acknowledging noise can help researchers plan and conduct tests better. Again, carrying out a pilot test can help identify any noise in the main test scenarios, allowing for necessary tweaks before actual testing. Other forms of noise can be less obvious but equally intrusive. For example, if you are conducting a test using a MacBook laptop and your participant is used to a PC, there is noise you need to account for, given their unfamiliarity with the laptop you’ve provided.

The fidelity of the design artifact being tested might introduce another form of noise. I’ve always advocated testing at any level of fidelity, but you should note that if you are using “Lorem Ipsum” or black and white designs, this potentially adds noise.

One of my favorite examples of this was a time when I was testing a financial services application, and the designers had put different balances on the screen; however, the total for all balances had not been added up to the correct total. Virtually every person tested noted this discrepancy, although it had nothing to do with the tasks at hand. I had to acknowledge we’d introduced noise to the testing. As at least one participant noted, they wouldn’t trust a tool that wasn’t able to total balances correctly.
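A lightweight way to avoid this kind of self-inflicted noise is to sanity-check prototype data before a session. A minimal sketch; the field names and figures are hypothetical:

```javascript
// Hypothetical account data as it would appear on a prototype screen.
const prototypeScreen = {
  accounts: [
    { name: "Checking", balance: 1250.25 },
    { name: "Savings", balance: 8300.5 },
  ],
  displayedTotal: 9550.75,
};

// Verify the displayed total matches the sum of the balances, so
// participants aren't distracted by math that doesn't add up.
function totalsAreConsistent(screen) {
  const sum = screen.accounts.reduce((acc, a) => acc + a.balance, 0);
  return Math.abs(sum - screen.displayedTotal) < 0.005; // tolerate float rounding
}
```

Running a check like this over every mocked screen during the dry run would have caught the mismatched balances before any participant saw them.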

Encouraging Feedback

Under the Transactional Model’s guidance, feedback isn’t just final thoughts after testing; it should be facilitated at each step of the process. Encouraging ‘think aloud’ protocols, where the user verbalizes their thoughts, reactions, and feelings during testing, ensures a constant flow of useful feedback.

You are receiving feedback throughout the process of usability testing, and the model provides guidance on how you should use that feedback to create a shared meaning with the participants. You will ultimately summarize this meaning in your report. You’ll later end up uncovering if this shared meaning was correctly interpreted when you design or redesign the product based on your findings.

We’ve now covered how to apply the Transactional Model of Communication to three common UX Research methods. All research with humans involves communication. You can break down other UX methods using the Model’s factors to make sure you engage in high-quality research.

Analyzing and Reporting UX Research Data Through the Lens of the Transactional Model

The Transactional Model of Communication doesn’t only apply to the data collection phase (interviews, surveys, or usability testing) of UX research. Its principles can provide valuable insights during the data analysis process.

The Transactional Model instructs us to view any communication as an interactive, multi-layered dialogue — a concept that is particularly useful when unpacking user responses. Consider the ‘message’ components: In the context of data analysis, the messages are the users’ responses. As researchers, thinking critically about how respondents may have internally processed the survey questions, interview discussion, or usability tasks can yield richer insights into user motivations.

Understanding Context

Just as the Transactional Model emphasizes the simultaneous interchange of communication, UX researchers should consider the user’s context while interpreting data. Decoding the meaning behind a user’s words or actions involves understanding their background, experiences, and the situation when they provide responses.

Deciphering Noise

In the Transactional Model, noise presents a potential barrier to effective communication. Similarly, researchers must be aware of snowballing themes or frequently highlighted issues during analysis. Noise, in this context, could involve patterns of confusion, misunderstandings, or consistently highlighted problems by users. You need to account for this, e.g., the example I provided where participants constantly referred to the incorrect math on static wireframes.

Considering Sender-Receiver Dynamics

Remember that as a UX researcher, your interpretation of user responses will be influenced by your own understanding, biases, or preconceptions, just as the responses were influenced by the user’s perceptions. By acknowledging this, researchers can strive to neutralize any subjective influence and ensure the analysis remains centered on the user’s perspective. You can ask other researchers to double-check your work to attempt to account for bias.

For example, if you come up with a clear theme that users need better guidance in the application you are testing, another researcher from outside of the project should come to a similar conclusion if they view the data; if not, you should have a conversation with them to determine what different perspectives you are each bringing to the data analysis.

Reporting Results

Understanding your audience is crucial for delivering a persuasive UX research presentation. Tailoring your communication to resonate with the specific concerns and interests of your stakeholders can significantly enhance the impact of your findings. Here are some more details:

  • Identify Stakeholder Groups
    Identify the different groups of stakeholders who will be present in your audience. This could include designers, developers, product managers, and executives.
  • Prioritize Information
    Prioritize the information based on what matters most to each stakeholder group. For example, designers might be more interested in usability issues, while executives may prioritize business impact.
  • Adapt Communication Style
    Adjust your communication style to align with the communication preferences of each group. Provide technical details for developers and emphasize user experience benefits for executives.

Acknowledging Feedback

Respecting this Transactional Model’s feedback loop, remember to revisit user insights after implementing design changes. This ensures you stay user-focused, continuously validating or adjusting your interpretations based on users’ evolving feedback. You can do this in a number of ways. You can reconnect with users to show them updated designs and ask questions to see if the issues you attempted to resolve were resolved.

Another way to address this without having to reconnect with the users is to create a spreadsheet or other document to track all the recommendations that were made and reconcile the changes with what is then updated in the design. You should be able to map the changes users requested to updates or additions to the product roadmap for future updates. This acknowledges that users were heard and that an attempt to address their pain points will be documented.

Crucially, the Transactional Model teaches us that communication is rarely simple or one-dimensional. It encourages UX researchers to take a more nuanced, context-aware approach to data analysis, resulting in deeper user understanding and more accurate, user-validated results.

By maintaining an ongoing feedback loop with users and continually refining interpretations, researchers can ensure that their work remains grounded in real user experiences and needs.

Tracking Your Application of the Transactional Model to Your Practice

You might find it useful to track how you align your research planning and execution to the framework of the Transactional Model. I’ve created a spreadsheet to outline key factors of the model and used this for some of my work. Demonstrated below is an example derived from a study conducted for a banking client that included interviews and usability testing. I completed this spreadsheet during the process of planning and conducting interviews. Anonymized data from our study has been furnished to show an example of how you might populate a similar spreadsheet with your information.

You can customize the spreadsheet structure to fit your specific research topic and interview approach. By documenting your application of the transactional model, you can gain valuable insights into the dynamic nature of communication and improve your interview skills for future research.

Pre-Interview Planning

  • Topic/Question (Aligned with research goals)
    Description: Identify the research question and design questions that encourage open-ended responses and co-construction of meaning.
    Example: Testing the mobile banking app’s bill payment feature. How do you set up a new payee? How would you make a payment? What are your overall impressions?
  • Participant Context
    Description: Note relevant demographic and personal information to tailor questions and avoid biased assumptions.
    Example: 35-year-old working professional, frequent user of the online banking and mobile application but unfamiliar with using the app for bill pay.
  • Engagement Strategies
    Description: Outline planned strategies for active listening, open-ended questions, clarification prompts, and building rapport.
    Example: Open-ended follow-up questions (“Can you elaborate on XYZ?” or “Please explain more to me what you mean by XYZ.”), active listening cues, positive reinforcement (“Thank you for sharing those details”).
  • Shared Understanding
    Description: List potential challenges to understanding the participant’s perspective and strategies for ensuring shared meaning.
    Example: Initially, the participant expressed some confusion about the financial jargon I used. I clarified and provided simpler [non-jargon] explanations, ensuring we were on the same page.

During Interview

  • Verbal Cues
    Description: Track the participant’s language choices, including metaphors, pauses, and emotional expressions.
    Example: The participant used a hesitant tone when describing negative experiences with the bill payment feature. When questioned, they stated it was “likely their fault” for not understanding the flow [it isn’t their fault].
  • Nonverbal Cues
    Description: Note the participant’s nonverbal communication, like body language, facial expressions, and eye contact.
    Example: Frowning and crossed arms when discussing specific pain points.
  • Researcher Reflexivity
    Description: Record moments where your own biases or assumptions might influence the interview and potential mitigation strategies.
    Example: Recognized my own familiarity with the app might bias my interpretation of users’ understanding [e.g., going slower than I would have when entering information]. Asked clarifying questions to avoid imposing my assumptions.
  • Power Dynamics
    Description: Identify instances where power differentials emerge and actions taken to address them.
    Example: The participant expressed trust in the research but admitted feeling hesitant to criticize the app directly. I emphasized anonymity and encouraged open feedback.
  • Unplanned Questions
    Description: List unplanned questions prompted by the participant’s responses that deepen understanding.
    Example: What alternative [non-bank app] methods do you use for paying bills? (Prompted by the participant’s frustration with app bill pay.)

Post-Interview Reflection

  • Meaning Co-construction
    Description: Analyze how both parties contributed to building shared meaning and insights.
    Example: Through dialogue, we collaboratively identified specific design flaws in the bill payment interface and explored additional pain points and areas that worked well.
  • Openness and Flexibility
    Description: Evaluate how well you adapted to unexpected responses and maintained an open conversation.
    Example: Adapted questioning based on the participant’s emotional cues and adjusted language to minimize technical jargon when that issue was raised.
  • Participant Feedback
    Description: Record any feedback received from participants regarding the interview process and areas for improvement.
    Example: “Thank you for the opportunity to be in the study. I’m glad my comments might help improve the app for others. I’d be happy to participate in future studies.”
  • Ethical Considerations
    Description: Reflect on whether the interview aligned with principles of transparency, reciprocity, and acknowledging power dynamics.
    Example: Maintained anonymity throughout the interview and ensured informed consent was obtained. Data will be stored and secured as outlined in the research protocol.
  • Key Themes/Quotes
    Description: Use this column to identify emerging themes or save quotes you might refer to later when creating the report.
    Example: Frustration with a confusing interface, lack of intuitive navigation, and desire for more customization options.
  • Analysis Notes
    Description: Use as many lines as needed to add notes for consideration during analysis.
    Example: Add notes here.

You can use the suggested columns from this table as you see fit, adding or subtracting as needed, particularly if you use a method other than interviews. I usually add the following additional columns for logistical purposes:

  • Date of Interview,
  • Participant ID,
  • Interview Format (e.g., in person, remote, video, phone).
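If you prefer to keep this log as structured data rather than in a spreadsheet tool, each entry could be a plain record. A minimal sketch; the field names simply mirror the columns above and are only a suggestion:

```javascript
// One tracking record per observation; field names mirror the
// spreadsheet columns and are a suggestion only.
const interviewLog = [
  {
    date: "2024-01-15",
    participantId: "P01",
    format: "remote video",
    stage: "During Interview",
    column: "Verbal Cues",
    note: "Hesitant tone when describing the bill payment flow.",
  },
];

// Helper to pull all notes for one stage when writing the report.
function notesForStage(log, stage) {
  return log.filter((row) => row.stage === stage).map((row) => row.note);
}
```

Structured records make it easy to filter by participant, stage, or interview format later, which a free-form document does not.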

Conclusion

By incorporating aspects of communication theory into UX research, UX researchers and those who work with UX researchers can enhance the effectiveness of their communication strategies, gather more accurate insights, and create better user experiences. Communication theory provides a framework for understanding the dynamics of communication, and its application to UX research enables researchers to tailor their approaches to specific audiences, employ effective interviewing techniques, design surveys and questionnaires, establish seamless communication channels during usability testing, and interpret data more effectively.

As the field of UX research continues to evolve, integrating communication theory into research practices will become increasingly essential for bridging the gap between users and design teams, ultimately leading to more successful products that resonate with target audiences.

As a UX professional, it is important to continually explore and integrate new theories and methodologies to enhance your practice. By leveraging communication theory principles, you can better understand user needs, improve the user experience, and drive successful outcomes for digital products and services.

Integrating communication theory into UX research is an ongoing journey of learning and implementing best practices. Embracing this approach empowers researchers to effectively communicate their findings to stakeholders and foster collaborative decision-making, ultimately driving positive user experiences and successful design outcomes.

]]>
hello@smashingmagazine.com (Victor Yocco)
<![CDATA[The Things Users Would Appreciate In Mobile Apps]]> https://smashingmagazine.com/2024/04/things-users-would-appreciate-mobile-apps/ Fri, 05 Apr 2024 12:00:00 GMT Remember the “mobile first” mantra? The idea was born out of the early days of responsive web design. Rather than design and build for the “desktop” up front, a “mobile-first” approach treats small screens as first-class citizens. There’s a reduced amount of real estate, certainly less than the number of pixels we get from the viewport of Firefox expanded fullscreen on a 27-inch studio monitor.

The constraint is a challenge to make sure that whatever is sent to mobile devices is directly relevant to what users should need; nothing more, nothing less. Anything more additive to the UI can be reserved for wider screens where we’re allowed to stretch out and make more liberal use of space.

/* A sample CSS snippet for a responsive main content */

/* Base Styles */
.main-content {
  container: main / inline-size;
}

.gallery {
  display: grid;
  gap: 1rem;
}

/* Container is wider than or equal to 700px */
@container main (width >= 700px) {
  .gallery {
    grid-template-columns: 1fr 1fr;
  }
}

/* Container is wider than or equal to 1200px */
@container main (width >= 1200px) {
  .gallery {
    grid-template-columns: repeat(4, 1fr);
  }
}

Now, I’m not here to admonish anyone who isn’t using a mobile-first approach when designing and building web interfaces. If anything, the last five or so years have shown us just how unpredictable of a medium the web is, including what sort of device is displaying our work all the way down to a user’s individual preferences.

Even so, there are things that any designer and developer should consider when working with mobile interfaces. Now that we’re nearly 15 years into responsive web design as a practice and craft, users are beginning to form opinions about what to expect when trying to do certain things in a mobile interface. I know that I have expectations. I am sure you have them as well.

I’ve been keeping track of the mobile interfaces I use and have noticed common patterns that feel mobile in nature but are desktop-focused in practice, making them unsuitable for small screens. Here are some reworked features worth considering for mobile interfaces.

Economically-Sized Forms

There are myriad problems that come up while completing mobile forms — e.g., small tap targets, lack of offline support, and incorrect virtual keyboards, to name a few — but it’s how a mobile form interacts with the device’s virtual keyboard that draws my ire the most.

The keyboard obscures the form more often than not. You tap a form field, the keyboard slides up, and — poof! — it’s as though half the form is chopped out of view. If you’re thinking, Meh, all that means is a little extra scrolling, consider that scrolling isn’t always a choice. If the page is a short one with only the form on it, it’s highly possible that what you see on the screen is what you get.

A more delightful user experience for mobile forms is to take a “less is more” approach. Display one form field at a time for an economical layout that allows the field and virtual keyboard to co-exist in harmony without any visual obstructions. Focusing the design on the top half of the viewport with room for navigation controls and microcopy creates a seamless flow from one form input to the next.
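The one-field-at-a-time flow can be modeled as a small state machine, independent of any UI framework. A minimal sketch with hypothetical field names; only the current field would be rendered, so the virtual keyboard never hides unanswered inputs:

```javascript
// Minimal one-field-at-a-time form state; field names are hypothetical.
function createStepForm(fields) {
  let index = 0;
  const answers = {};
  return {
    // The single field to render right now.
    current: () => fields[index],
    // Record an answer and advance to the next field, if any.
    answer(value) {
      answers[fields[index]] = value;
      if (index < fields.length - 1) index++;
      return answers;
    },
    isComplete: () => Object.keys(answers).length === fields.length,
  };
}

const form = createStepForm(["name", "email", "phone"]);
form.answer("Ada");
form.answer("ada@example.com");
form.answer("555-0100");
```

Because the state lives outside the DOM, the same logic can drive a single-field layout on mobile and a conventional multi-field layout on wider screens.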

More Room For Searching

Search presents a dichotomy: It is incredibly useful, yet is treated as an afterthought, likely tucked in the upper-right corner of a global header, out of view and often further buried by hiding the form input until the user clicks some icon, typically a magnifying glass. (It’s ironic we minimize the UI with a magnifying glass, isn’t it?)

The problem with burying search in a mobile context is two-fold:

  1. The feature is less apparent, and
  2. The space to enter a search query, add any filters, and display results is minimized.

That may very well be acceptable if the site has only a handful of pages to navigate. However, if the search allows a user to surface relevant content and freely move about an app, then you’re going to want to give it higher prominence.

Any service-oriented mobile app can improve user experience by providing a search box that’s immediately recognizable and with enough breathing room for tapping a virtual keyboard.

Some sites even have search forms that occupy the full screen without surrounding components, offering a “distraction-free” interface for typing queries.

No Drop-Downs, If Possible

The <select> element can negatively impact mobile UX in two glaring ways:

  1. An expanding <select> with so many options that it forces excessive scrolling.
  2. On long pages, scrolling to the end of the <option> list can awkwardly continue scrolling the page itself.

I’ll come across these situations, stare at my phone, and ask:

Do I really need a <select> populated with one hundred years to submit my birthdate?

Seems like an awful lot of trouble to sort through one hundred years compared to manually typing in a four-digit value myself.
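A typed four-digit year with validation can replace the hundred-option <select> entirely. A minimal sketch; the age bounds and default year are illustrative:

```javascript
// Accept a typed four-digit birth year instead of a 100-option select.
// The 120-year window and default currentYear are illustrative only.
function isValidBirthYear(input, currentYear = 2024) {
  const text = String(input).trim();
  const year = Number(text);
  return (
    /^\d{4}$/.test(text) &&          // exactly four digits
    year >= currentYear - 120 &&     // not implausibly old
    year <= currentYear              // not in the future
  );
}
```

Pairing an input like this with `inputmode="numeric"` in the markup brings up the digit keyboard on mobile, making entry faster than scrolling any list.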

A better pattern for making list selections, if absolutely needed, in a space-constrained mobile layout could be something closer to the ones used by iOS and Android devices by default. Sure, there’s scrolling involved, but within a confined space that leaves the rest of the UI alone.

A Restrained Overview

A dashboard overview in a mobile app is something that displays key data to users right off the bat, surfacing common information that the user would otherwise have to seek out. There are great use cases for dashboards, many of which you likely interact with daily, like your banking app, a project management system, or even a page showing today’s baseball scores. The WordPress dashboard is a perfect example, showing site activity, security checks, traffic, and more.

Dashboards are just fine. But the sensitive information they might contain is what worries me when I’m viewing a dashboard on a mobile device.

Let’s say I’m having dinner with friends at a restaurant. We split the check. To pay my fair share, I take out my phone and log into my banking app, and… the home screen displays my bank balance in big, bold numbers to the friends sitting right next to me, one of whom is the gossipiest of the bunch. There goes a bit of my pride.

That’s an extreme illustration because not all apps convey that level of sensitive information. But many do, and the care we put into protecting a user’s information from peeping eyeballs is only becoming more important as entire industries, like health care and education, lean more heavily into online experiences.

My advice: Hide sensitive information until prompted by the user to display it.

It’s generally good practice to obscure sensitive information and to keep a liberal definition of what counts as sensitive.
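As a minimal sketch of the “hidden until prompted” idea, here is a hypothetical `maskValue()` helper that a tap or click handler could toggle; the function name and behavior are my own invention, not a standard API:

```javascript
// Sketch: keep sensitive values masked by default; reveal only on
// an explicit user action. `maskValue` is a hypothetical helper.
function maskValue(value, revealed = false) {
  // Replace every character with a bullet unless the user opted in.
  return revealed ? String(value) : "•".repeat(String(value).length);
}

// Masked by default; a tap on the field would flip `revealed`.
console.log(maskValue("1,234.56"));
console.log(maskValue("1,234.56", true));
```

The same principle applies server-side, too: send the full value only after the user asks for it.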

Shortcuts Provided In The Login Flow

There’s a natural order to things, particularly when logging into a web app. We log in, see a welcome message, and are taken to a dashboard before we tap on some item that triggers some sort of action. In other words, it takes a few steps, and walking through a couple of doors, to accomplish what we came to do.

What if we put actions earlier in the login flow? As in, they are displayed right along with the login form. This is what we call a shortcut.

Let’s take the same restaurant example from earlier, where I’m back at dinner and ready to pay. This time, however, I will open a different bank app. This one has shortcuts next to the login form, one of which is a “Scan to Pay” option. I tap it, log in, and am taken straight to the scanning feature that opens the camera on my device, completely bypassing any superfluous welcome messaging or dashboard. This spares the user both time and effort (and prevents sensitive information from leaking into view to boot).

Mobile operating systems also provide options for shortcuts when long-pressing an app’s icon on the home screen, which can also be used to an app’s advantage.
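On the web side, installable apps can define those long-press entries through the `shortcuts` member of the Web App Manifest. A trimmed-down sketch, with an invented app name and URL:

```json
{
  "name": "Example Bank",
  "start_url": "/",
  "display": "standalone",
  "shortcuts": [
    {
      "name": "Scan to Pay",
      "url": "/pay/scan",
      "description": "Open the payment scanner directly"
    }
  ]
}
```

Each shortcut deep-links past the welcome screen and dashboard, mirroring the in-app shortcut pattern described above.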

The Right Keyboard Configuration

All modern mobile operating systems are smart enough to tailor the virtual keyboard for specialized form inputs. A form field marked up with type="email", for instance, triggers an onscreen keyboard that shows the “@” key in the primary view, preventing users from having to tap Shift to reveal it in a subsequent view. The same is true with URLs, where a “.com” key surfaces for inputs with type="url".

The right keyboard saves users the effort of hunting down relevant special characters and numbers for specific fields. All we need to do is use the right attributes and semantics for those fields, like type="email", type="url", and type="tel".

<!-- Input Types for Virtual Keyboards -->
<input type="text">   <!-- default -->
<input type="tel">    <!-- telephone keypad -->
<input type="number"> <!-- numeric keyboard -->
<input type="email">  <!-- displays @ key -->
<input type="url">    <!-- displays .com key -->
<input type="search"> <!-- displays search button -->
<input type="date">   <!-- displays date picker or wheel controls -->
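Beyond type, the standard inputmode and enterkeyhint attributes offer finer-grained hints without changing how a field validates. A short sketch:

```html
<!-- `inputmode` requests a keyboard layout; `enterkeyhint`
     relabels the Enter key for the task at hand. -->
<input type="text" inputmode="numeric">     <!-- digits-only keypad -->
<input type="text" inputmode="decimal">     <!-- digits plus decimal separator -->
<input type="search" enterkeyhint="search"> <!-- Enter key reads "Search" -->
<input type="email" enterkeyhint="done">    <!-- Enter key reads "Done" -->
```

These hints are ignored by browsers that don’t support them, so they are safe to add alongside the input types above.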
Bigger Fonts With Higher Contrast

This may have been one of the first things that came to your mind when reading the article title. That’s because small text is prevalent in mobile interfaces. It’s not wrong to scale text in response to smaller screens, but where you set the lower end of the range may be too small for many users, even those with great vision.

The default size of body text is 16px on the web. That’s thanks to user agent styling that’s consistent across all browsers, including those on mobile platforms. But what exactly is the ideal size for mobile? The answer is not entirely clear. For example, Apple’s Human Interface Guidelines do not specify exact font sizes but rather focus on the use of Dynamic Text that adjusts the size of content to the user’s device-level preferences. Google’s Material Design guidelines are more concrete but are based on three scales: small, medium, and large. The following table shows the minimum font sizes for each scale based on the system’s typography tokens.

Scale  | Body Text (pt) | Body Text (px)
Small  | 12pt           | 16px
Medium | 14pt           | 18.66px
Large  | 16pt           | 21.33px

The real standard we ought to be looking at is the current WCAG 2.2, and here’s what it says on the topic:

“When using text without specifying the font size, the smallest font size used on major browsers for unspecified text would be a reasonable size to assume for the font.”

So, the bottom line is that the lower end of a font’s scale should match the web’s default 16px if we accept Android’s “Small” defaults. But even then, there are caveats because WCAG is more focused on contrast than size. For example, if the font in use is thin by default, WCAG suggests bumping up the font size to produce a higher contrast ratio between the text and the background it sits against.
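One way to honor that 16px floor while still letting text scale across viewports is CSS clamp(); the exact bounds below are illustrative:

```css
/* Fluid body text that never drops below the 16px default:
   floor 1rem (16px), ceiling 1.25rem (20px), fluid in between. */
body {
  font-size: clamp(1rem, 0.9rem + 0.5vw, 1.25rem);
}
```

Using rem-based bounds also respects a user’s own font-size preference rather than overriding it.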

There are many, many articles that can give you summaries of various typographical guidelines and how to adhere to them for optimal font sizing. The best advice I’ve seen, however, is Eric Bailey rallying us to beat the “Reader Mode” button. Eric is talking more specifically about preventing clutter in an interface, but the same holds for font sizing. If the text is tiny or thin, I’m going to bash that button on your site with no hesitation.

Wrapping Up

Everything we’ve covered in this article stems from personal irritations I feel when interacting with different mobile interfaces. I’m sure you have your own set of annoyances, and likely even more examples; if you’re willing to share them, I’d love to read about them in the comments.

The point is that we’re in some kind of “post-responsive” era of web design, one that looks beyond how elements stack in response to the viewport to consider user preferences, privacy, and providing optimal experiences at any breakpoint regardless of whatever device is used.

]]>
hello@smashingmagazine.com (Preethi Sam)
<![CDATA[Iconography In Design Systems: Easy Troubleshooting And Maintenance]]> https://smashingmagazine.com/2024/04/iconography-design-systems-troubleshooting-maintenance/ https://smashingmagazine.com/2024/04/iconography-design-systems-troubleshooting-maintenance/ Wed, 03 Apr 2024 08:00:00 GMT We all have an inherent tendency to like aesthetic and approachable things. That’s why any designer strives to deliver an intuitive and comprehensive design. And any designer knows it’s a pain in the neck, particularly when it comes to complex projects with lots of pages of design, components, and prototypes. As someone who has already traveled that road and learned quite a few valuable lessons, I have some wisdom to share with you.

Granted, tons of articles have been written about design systems, their complexity, and all the ways they can benefit a project. So, I’m not going to waste too many words on the matter. Instead, in this article, I want to dig deeper into iconography as part of a design system. Besides, I’ll share with you doable tips that will turn icon creation and maintenance into an enjoyable — or at least bearable — process. Shall we?

Design Systems: From Chaos To Order

Design Systems 101

Before we actually dive into the alluring and frightening world of iconography, I want to take some time to introduce you to the concept of a design system, as well as share my thoughts and rules that I follow while working with them.

If a design is the way your interface speaks with your user, then — building on the metaphor — it would be fair to view a design system as the language itself.

Simply put, a design system is a functional set of defined technical standards, best user behavior practices, and navigational patterns that are used to build a pixel-perfect digital product.

It is a powerful designer tool that helps you make sure that you will end up with a coherent product and not a pathetic mess.

It seems that nowadays, designers are obliged to try their hand at creating or at least adopting a design system. So what exactly makes it an all-around beneficial thing for the designer lot? Let’s have a look:

  • The design system serves as the single source of truth since all the components live under one roof and are easy to reference.
  • It hosts all the guidelines on how to implement existing components. And following the very same guidelines, designers can easily create new ones that match the former.
  • On a team of two or more designers, a design system ensures visual consistency (which is crucial if your project is major and fast-evolving).
  • You can either use ready-made design components or alter them swiftly and in accordance with the guideline if any need arises.
  • You have access to a library of surefire design patterns, which greatly reduces the strain of coming up with new solutions.

That sounds like a treat, right? Still, creating a design system is generally viewed as an exceptionally time- and effort-consuming endeavor. If you do want to develop a design system, there is a way to make the process a bit easier. Enter the atomic design approach.

Atomic Design Approach

It’s been over six years since I first introduced the atomic approach into my workflow, and let me tell you, it was a game-changer for me as a designer. This methodology is a blessing if you work on a big project with a team of fellow designers.

If you know the pain of trying to track the changes in components throughout the projects, especially if these components are minor, then you’ll see why I’m so enthusiastic about the atomic approach. It allows for smooth and well-coordinated teamwork where every designer is aware of what component they are creating and how to make it consistent with the rest of the system.

The atomic design approach was pioneered by Brad Frost, who borrowed the metaphor from chemistry. It implies building your system brick by brick, starting with the smallest items and working all the way up while sustaining hierarchy. There are five stages to the process.

  • Atoms
    In a nutshell, these are basic HTML elements.

  • Molecules
    They are single-pattern components that do one thing.

  • Organisms
    They are composed of groups of molecules, or/and atoms, or/and other organisms.

  • Templates
    They provide a context for using molecules and organisms and focus on the page’s underlying content structure. In other words, templates are the guidelines.

  • Pages
    They show what a UI looks like with proper content.

What exactly makes this approach a thing designers gravitate towards? Here are my two cents on the matter:

  • Creating a design system resembles playing with a construction set. You begin with the smallest components and progress in size, which means you are eating the elephant a bite at a time and don’t get overwhelmed.
  • Altering a component in one place will cause updates wherever a certain atom, molecule, or organism is applied. This eliminates any need for manual tweaking of components.
  • This approach provides designers with design patterns, meaning that you no longer need to create new ones and worry about their consistency.

That’s clearly not all the advantages of this methodology, so if you are interested, go ahead and read more about it in Brad Frost’s book.

What I really want to focus on is our job as designers in creating and maintaining those fabled design systems, both atomic and regular. More specifically, on iconography. And even more specifically, on the pitfalls we have a nasty habit of falling into when dealing with icons as the atoms of our systems. Off we go.

Iconography In Design Systems: Maladies and Remedies

Common Problems

Since I’m relying on my own experience when it comes to design systems, it would only be fair if I shared the biggest issues that I personally have with iconography in the context of design systems and how I solve them. I’ll share with you surefire tips on how to keep your iconography consistent and ensure its smooth integration into design environments.

If we regard a single icon from the atomic design standpoint, we would consider it an atom — the smallest but essential element, just like the color palette or typography. If we continue with our language analogy, I will take the liberty of calling icons a design’s vocabulary. So, it’s fairly clear that icons are the actual core of your design.

As any designer knows, users heavily rely on icons as an interactional element of an interface. Despite being the smallest of components, icons might prove to be a major pain in the neck in terms of creation. This is the lesson I have learned during my tenure as a UX designer.

Tip 1: Since an atom is not just an autonomous element, you have to think beforehand about how it will behave as part of a larger component, like a molecule, an organism, or a template.

These are the variables you have to keep in mind when developing an icon:

  • Is your icon scalable?
  • Does it have color variations?
  • Do you classify your icon according to meaning, group, style, or location?
  • Is there an option to change the icon’s meaning or style?
  • How can you easily introduce a new icon into an existing roster?
  • How should you navigate a situation when different designers develop icons separately?
  • How can you make locating a certain icon within your design system easier?

Here are some challenges that I personally face while developing iconography for a design system:

  • How should I keep track of icon updates and maintain their consistency?
  • How should I develop icon creation guidelines?
  • What should I do if current icons happen to be inconsistent?
  • How should I inform my design team of any changes?

It might be hard to wrap your head around so many questions, but worry not. I’ll try my best to cover all these issues as we go on.

Rules Of Thumb

An icon isn’t just a little pictogram with a certain meaning behind it. An icon is a symbol of action, an interactive element of a digital interface that helps users navigate through the system.

In other words, it is a tool, and the process of building a tool implies following rules. I found out firsthand that if you master the basic icon rules, then you’ll be able to build both stand-alone icons and those that are part of a larger environment with equal efficiency. Besides, you’ll enhance your ability to create icon sets and various icon types within a single project, all while maintaining their readability and accessibility.

Tip 2: Keep consistency by establishing the basic icon rules before building your icon library.

The following are the rules that I abide by:

Grid

I use the classic 24px grid for standard icons and a 44px grid for larger icons. Each grid consists of the padding area (marked in red, 2 px) and the live area (marked in blue, 20 px). The live area is the space that your icon content stays inside. Its shape depends on the icon’s body and could be circular, square, vertical-rectangular, or horizontal-rectangular.

Before you sit down to draw your icon, decide how much space your icon’s body will occupy in order to come up with the proper shape.

Size

Each icon within a design system has to have a primary size, which is the size that up to 90% of all icons share. I consider the 24px icon size suggested by Google’s Material Design to be the golden standard. This size works well both for desktop and mobile devices.

Still, there is room for exceptions in this rule when an icon needs to be smaller or larger. In this case, I employ a 4-pixel step rule. I increase or decrease the icon’s size by 4 pixels at a time (e.g., I go from 24 to 20, then 16, then 12 px, or 28, 32 px, and so on). I would still personally prefer the golden standard of 24 pixels since I find smaller sizes less readable or larger sizes too visually domineering.
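The 4-pixel step rule above could be captured as design tokens; the custom property names here are hypothetical, not part of any established system:

```css
/* Hypothetical icon-size tokens following the 4px step rule. */
:root {
  --icon-size-xs: 16px;
  --icon-size-sm: 20px;
  --icon-size-md: 24px; /* primary size, shared by ~90% of icons */
  --icon-size-lg: 28px;
}

/* Icons default to the primary size unless a token overrides it. */
.icon {
  inline-size: var(--icon-size-md);
  block-size: var(--icon-size-md);
}
```

Encoding the steps as tokens keeps designers and developers from inventing one-off sizes that break the scale.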

Weight

Another key property to consider is the outline weight of your icon if you opt for this type. If you are building an icon library from scratch, it would be wise to test several outline weight values before you make a decision. This is especially crucial for icons that contain fine details.

Granted, you can assign different weight values to different types of icons, but you might struggle to write clear guidelines for your fellow designers. I usually make a conscious decision to go with a unified outline weight for all the icons, namely, 2 points.

Fill

A solid icon variant might considerably enhance the accessibility and readability of an interface icon. It’s really handy to have both solid and outline icon types. But not all your icons should have two options. If you choose to draw a solid option, determine what parts of your icon you want to make solid.

Design principles

As I’ve mentioned before, an icon is an essential interface element. This means that an icon should be simplistic, bold, and — what’s even more important in the context of design systems — created according to the unified rules.

I have a little trick I use to see how well a new icon fits the standard. I simply integrate the new icon into the interface populated by already existing elements. This helps determine if the new icon matches the rest.

Anatomy

Aspects such as corners, counterstrokes, and stroke terminals provide the much-desired visual consistency. Obviously, all these elements should be unified for all the icons within a design system. A comprehensive guide to icon anatomy is available at Material Design.

Icon Consistency: Surefire Tips

Before I actually share my tips on how to deal with icons within a design system efficiently, here’s a little backstory to how I came up with them. It all started when I joined a project that already had an established host of icons. There were over a hundred of them. And the number grew because the project was a fast-evolving thing. So, the design system, like any other, was like a living being, constantly in a state of change.

The icon library was a mishmash of different icon types, creating quite a noise. The majority of icons differed in style, size, and application. Another problem I had was the fact that most of the icons did not have the source file. So, there was no way to quickly tweak an icon to match the rest.

The first and most important thing I did was to establish the basic rules for icon creation (that’s something we’ve already covered). This step was supposed to keep the design team from creating inconsistent icons.

Tip 3: Put all your icons on one layout. This way, you’ll get a full visual understanding of your icons and determine repetitive design patterns.

Now, here comes the juicy stuff. Here is my guide on how to sustain iconography in the context of a design system.

  • Divide your icons into subcategories.
    This rule works wonders when you have an array of inconsistent icons on your hands. There is no rule on what subcategories there should be. It all depends on your design system and the number of existing icons.
    In my case, I established three groups divided by size and icon style, which resulted in three subcategories: regular icons, detailed icons, and illustrations. Once you divide your icons in the same manner, it’ll be easier to apply the same rules to each group. Besides, this approach allows for a more structured storage of these icons within your design system.

  • Determine guidelines for each icon type.
    The next step is as wise as it is hard to pull off. You need to assign certain icon creation rules for each of the icon types (provided you have more than one). This is the basis upon which all your other attempts at achieving visual consistency will be built. To tame all the mismatched icons, I used the basic icon rules we’ve covered above. To keep track, I created a page in Figma for each of the icon types and used the basic size as the file name.

  • Group your icons wisely.
    When naming icons, I opt for the semantic section approach. Generally, you can divide all your icons into groups based on their meaning or application in the interface. Look at the example below; we have three distinct semantic sections: Transport, Services, and Warnings. Depending on their meaning, icons should be assigned to the corresponding sections. Then, those sections are, in turn, divided into subsections. For instance, the Transport section has Ground Transport and Air Transport. The main idea you should stick to is to keep your icons in separate sections.

  • Stick to clear names and descriptions.
    I have to admit that dividing icons into semantic sections does have a massive disadvantage: this division could be quite subjective. This is why it is crucial to add a detailed description to each of the icons. This will simplify icon search within a design system and will give a clear understanding of an icon’s application. This is how I create a description:
    • Tags: reference words that facilitate searching for an icon within the system.
    • Usage: a brief description of an icon’s application.
    • Group Name: the name of the group an icon belongs to. This helps with locating an icon right within the library.
    • Designs: an incredibly nifty tool that allows you to insert a link to the design and documentation that features the icon in question. This way, you’ll know the context in which the icon is applied.

  • Use color coding and symbols while updating icon design.
    This trick works best when you are not yet done with the icon library, but you need to communicate to your team which icons are ready to use and which still need a bit of enhancement. For instance, I mark the names of finished icons with a green symbol. An orange symbol marks those icons that need to be improved. And in case I need an icon deleted or drawn anew, I use a red cross.

  • Keep non-rasterised versions of icons.
    It can be handy to have a non-rasterised version of an icon within arm’s reach. There’ve been cases when I was asked to create a similar icon or an icon that could use the same graphic forms as the existing ones. Should this happen again, I can simply take the original file and easily draw an icon. I store all the non-rasterised icons on a separate page in the file following the defined hierarchy.

  • Rasterise the icon vector.
    Be sure to apply the Outline Stroke feature before you create the icon component. This will allow for easy color change (more on this in the next tip) and scaling.

  • Mind the colors of your icons.
    I suggest keeping icons in the primary, most commonly used color by default. Another worthwhile thing to do is to name all icon colors according to their intended use and the interactions they perform. In order to do that, you need to equip your color library with a separate set of colors for all icon states, like primary, hover, and disabled. Make sure to name each set properly.

  • Assign a designer to maintain icons in the system.
    This is a seemingly trivial tip that, however, will save you trouble maintaining style and categorization consistency. I’ve personally had edge cases when the established rules failed. Having a designated designer who knew their way around the system helped to find a quick solution.
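The icon-state color sets mentioned above might look like this as tokens; the names and hex values are invented for illustration:

```css
/* Hypothetical color tokens for icon states. */
:root {
  --icon-color-primary: #1f1f1f;
  --icon-color-hover: #0b57d0;
  --icon-color-disabled: #9e9e9e;
}

.icon { color: var(--icon-color-primary); }
.icon:hover { color: var(--icon-color-hover); }
.icon:disabled,
.icon[aria-disabled="true"] { color: var(--icon-color-disabled); }
```

Naming each set after its interaction state, rather than its hue, keeps the library usable even when the palette changes.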

Real Example Of Guidelines Applied

To wrap up this whole lecture and actually see all these rules in action, take a look at the following template file.

Final Thoughts: Is It Worth It?

No matter how sick you might be of dealing with unending visual inconsistency, design systems are still challenging. They can scare any designer regardless of their experience. Still, if you want to bring order to chaos, introducing a design system into your workflow is worth the trouble, especially when it comes to maintaining iconography.

After all, iconography is the most volatile part of a design system in terms of visual variety. That’s why iconography was the biggest challenge I had to face in my tenure as a designer. And that’s exactly why I am genuinely proud that I’ve tamed that beast and can now share my hacks with you.

Resources

Public design systems:

Design systems resources:

Icons resources:

]]>
hello@smashingmagazine.com (Tatsiana Tarkan)
<![CDATA[Infinite-Scrolling Logos In Flat HTML And Pure CSS]]> https://smashingmagazine.com/2024/04/infinite-scrolling-logos-html-css/ https://smashingmagazine.com/2024/04/infinite-scrolling-logos-html-css/ Tue, 02 Apr 2024 12:00:00 GMT ` element? It’s deprecated, so it’s not like you’re going to use it when you need some sort of horizontal auto-scrolling feature. That’s where CSS comes in because it has all the tools we need to pull it off. Silvestar Bistrović demonstrates a technique that makes it possible with a set of images and as little HTML as possible.]]> When I was asked to make an auto-scrolling logo farm, I had to ask myself: “You mean, like a <marquee>?” It’s not the weirdest request, but the thought of a <marquee> conjures up the “old” web days when Geocities ruled. What was next, a repeating sparkling unicorn GIF background?

If you’re tempted to reach for the <marquee> element, don’t. MDN has a stern warning about it right at the top of the page:

“Deprecated: This feature is no longer recommended. Though some browsers might still support it, it may have already been removed from the relevant web standards, may be in the process of being dropped, or may only be kept for compatibility purposes. Avoid using it, and update existing code if possible […] Be aware that this feature may cease to work at any time.”

That’s fine because whatever infinite scrolling feature <marquee> offered, we can most certainly pull off in CSS. But when I researched examples to help guide me, I was surprised to find very little on it. Maybe auto-scrolling elements aren’t all the rage these days. Perhaps the sheer nature of auto-scrolling behavior is enough of an accessibility red flag to scare us off.

Whatever the case, we have the tools to do this, and I wanted to share how I went about it. This is one of those things that can be done in lots of different ways, leveraging lots of different CSS features. Even though I am not going to exhaustively explore all of them, I think it’s neat to see someone else’s thought process, and that’s what you’re going to get from me in this article.

What We’re Making

But first, here's an example of the finished result:

See the Pen CSS only marquee without HTML duplication [forked] by Silvestar Bistrović.

The idea is fairly straightforward. We want some sort of container, and in it, we want a series of images that infinitely scroll without end. In other words, as the last image slides in, we want the first image in the series to directly follow it in an infinite loop.

So, here’s the plan: We’ll set up the HTML first, then pick at the container and make sure the images are correctly positioned in it before we move on to writing the CSS animation that pulls it all together.

Existing Examples

Like I mentioned, I tried searching for some ideas. While I didn’t find exactly what I was looking for, I did find a few demos that provided a spark of inspiration. What I really wanted was to use CSS only while not having to “clone” the marquee items.

Geoff Graham’s “Sliding Background Effect” is close to what I wanted. While it is dated, it did help me see how I could intentionally use overflow to allow images to “slide” out of the container and an animation that loops forever. It’s a background image, though, and relies on super-specific numeric values that make it tough to repurpose in other projects.

See the Pen Untitled [forked] by @css-tricks.

There’s another great example from Coding Journey over at CodePen:

See the Pen Marquee-like Content Scrolling [forked] by Coding Journey.

The effect is what I’m after for sure, but it uses some JavaScript, and even though it’s just a light sprinkle, I would prefer to leave JavaScript out of the mix.

Ryan Mulligan’s “CSS Marquee Logo Wall” is the closest thing. Not only is it a logo farm with individual images, but it demonstrates how CSS masking can be used to hide the images as they slide in and out of the container. I was able to integrate that same idea into my work.

See the Pen CSS Marquee Logo Wall [forked] by Ryan Mulligan.

But there’s still something else I’m after. What I would like is the smallest amount of HTML possible, namely markup that does not need to be duplicated to create the impression that there’s an unending number of images. In other words, we should be able to create an infinite-scrolling series of images where the images are the only child elements in the “marquee” container.

I did find a few more examples in other places, but these were enough to point me in the right direction. Follow along with me.

The HTML

Let's set up the HTML structure before anything else. Again, I want this to be as “simple” as possible, meaning very few elements with the shortest family tree possible. We can get by with nothing but the “marquee” container and the logo images in it.

<figure class="marquee">
  <img class="marquee__item" src="logo-1.png" width="100" height="100" alt="Company 1">
  <img class="marquee__item" src="logo-2.png" width="100" height="100" alt="Company 2">
  <img class="marquee__item" src="logo-3.png" width="100" height="100" alt="Company 3">
</figure>

This keeps things as “flat” as possible. There shouldn’t be anything else we need in here to make things work.

Setting Up The Container

Flexbox might be the simplest approach for establishing a row of images with a gap between them. We don’t even need to tell it to flow in a row direction because that’s the default.

.marquee {
  display: flex;
}

I already know that I plan on using absolute positioning on the image elements, so it makes sense to set relative positioning on the container to, you know, contain them. And since the images are in an absolute position, they have no reserved height or width dimensions that influence the size of the container. So, we’ll have to declare an explicit block-size (the logical equivalent to height). We also need a maximum width so we have a boundary for the images to slide in and out of view, so we’ll use max-inline-size (the logical equivalent to max-width):

.marquee {
  --marquee-max-width: 90vw;

  display: flex;
  block-size: var(--marquee-item-height);
  max-inline-size: var(--marquee-max-width);
  position: relative;
}

Notice I’m using a couple of CSS variables in there: one that defines the marquee’s height based on the height of one of the images (--marquee-item-height) and one that defines the marquee’s maximum width (--marquee-max-width). We can give the marquee’s maximum width a value now, but we’ll need to formally register and assign a value to the image height, which we will do in a bit. I just like knowing what variables I am planning to work with as I go.

Next up, we want the images to be hidden when they are outside of the container. We’ll set the horizontal overflow accordingly:

.marquee {
  --marquee-max-width: 90vw;

  display: flex;
  block-size: var(--marquee-item-height);
  max-inline-size: var(--marquee-max-width);
  overflow-x: hidden;
  position: relative;
}

And I really like the way Ryan Mulligan used a CSS mask. It creates the impression that images are fading in and out of view. So, let’s add that to the mix:

.marquee {
  display: flex;
  block-size: var(--marquee-item-height);
  max-inline-size: var(--marquee-max-width);
  overflow-x: hidden;
  position: relative;
  mask-image: linear-gradient(
    to right,
    hsl(0 0% 0% / 0),
    hsl(0 0% 0% / 1) 20%,
    hsl(0 0% 0% / 1) 80%,
    hsl(0 0% 0% / 0)
  );
}

Here’s what we have so far:

See the Pen CSS only marquee without HTML duplication, example 0 [forked] by Silvestar Bistrović.

Positioning The Marquee Items

Absolute positioning is what allows us to yank the images out of the document flow and manually position them so we can start there.

.marquee__item {
  position: absolute;
}

That makes it look like the images are completely gone. But they’re there — the images are stacked directly on top of one another.

Remember that CSS variable for our container, --marquee-item-height? Now, we can use it to match the marquee item height:

.marquee__item {
  block-size: var(--marquee-item-height);
  position: absolute;
  inset-inline-start: var(--marquee-item-offset);
}

To push marquee images outside the container, we need to define a --marquee-item-offset, but that calculation isn’t trivial, so we’ll work it out in the next section. In the meantime, we know what the animation needs to be: something that moves linearly for a certain duration after an initial delay, then repeats infinitely. Let’s plug that in with some variables as temporary placeholders.

.marquee__item {
  block-size: var(--marquee-item-height);
  position: absolute;
  inset-inline-start: var(--marquee-item-offset);
  animation: go linear var(--marquee-duration) var(--marquee-delay, 0s) infinite;
}

To animate the marquee items infinitely, we have to define two CSS variables, one for the duration (--marquee-duration) and one for the delay (--marquee-delay). The duration can be any length you want, but the delay should be calculated, which is what we will figure out in the next section.
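For clarity, here is the same shorthand with each part labeled. In the animation shorthand, the first time value is parsed as the duration and the second as the delay, and the 0s fallback in var() means an item with no registered delay starts immediately:

```css
.marquee__item {
  /*         name | timing | duration              | delay (0s fallback)    | iterations */
  animation: go     linear   var(--marquee-duration) var(--marquee-delay, 0s) infinite;
}
```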

.marquee__item {
  block-size: var(--marquee-item-height);
  position: absolute;
  inset-inline-start: var(--marquee-item-offset);
  animation: go linear var(--marquee-duration) var(--marquee-delay, 0s) infinite;
  transform: translateX(-50%);
}

Finally, we translate the marquee item by -50% horizontally. This small “hack” keeps the items aligned consistently in situations where the image sizes are uneven.

See the Pen CSS only marquee without HTML duplication, example 2 [forked] by Silvestar Bistrović.

Animating The Images

To make the animation work, we need the following information:

  • Width of the logos,
  • Height of the logos,
  • Number of items, and
  • Duration of the animation.

Let’s use the following configurations for our set of variables:

.marquee--8 {
  --marquee-item-width: 100px;
  --marquee-item-height: 100px;
  --marquee-duration: 36s;
  --marquee-items: 8;
}

Note: I’m using the BEM modifier .marquee--8 to define the configuration for the marquee with eight logos.

We can define the animation keyframes now that we know the --marquee-item-width value.

@keyframes go {
  to {
    inset-inline-start: calc(var(--marquee-item-width) * -1);
  }
}

The animation moves each marquee item from right to left: an item enters the view from the right edge, travels across the container, and exits out of view past the left edge.

Now, we need to define the --marquee-item-offset. We want to push the marquee item all the way to the right side of the marquee container, opposite of the animation end state.

You might think the offset should be 100% + var(--marquee-item-width), but that would make the logos overlap on smaller screens. To prevent that, we need to know the minimum width of all logos combined. We do that in the following way:

calc(var(--marquee-item-width) * var(--marquee-items))

But that alone is not enough. If the marquee container is wider than the combined width of all the logos, the offset would land inside the container, and the logos would pop into view instead of sliding in from outside it. To prevent that, we take the larger of the two values with the max() function:

--marquee-item-offset: max(
  calc(var(--marquee-item-width) * var(--marquee-items)),
  calc(100% + var(--marquee-item-width))
);

The max() function returns the larger of its two arguments: the combined width of all the logos, or the container’s full width plus the width of a single logo, which we defined earlier. The latter wins on bigger screens and the former on smaller screens.
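As a quick sanity check, assume the values we register later (100px-wide items, eight of them). The first argument resolves to 800px; the second depends on the container width, so max() flips between the two. The container widths in the comments are purely illustrative:

```css
.marquee__item {
  /* First argument: 8 × 100px = 800px                                      */
  /* Wide container, 1200px:  100% + 100px = 1300px → max() returns 1300px  */
  /* Narrow container, 500px: 100% + 100px = 600px  → max() returns 800px   */
  --marquee-item-offset: max(
    calc(var(--marquee-item-width) * var(--marquee-items)),
    calc(100% + var(--marquee-item-width))
  );
}
```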

See the Pen CSS only marquee without HTML duplication, example 3 [forked] by Silvestar Bistrović.

Finally, we will define the complicated animation delay (--marquee-delay) with this formula:

--marquee-delay: calc(var(--marquee-duration) / var(--marquee-items) * (var(--marquee-items) - var(--marquee-item-index)) * -1);

The formula looks scarier than it is. Because calc() evaluates division and multiplication from left to right, the delay equals the animation duration divided by the number of items (that quotient is the time gap between two consecutive items), multiplied by each item’s distance from the end of the sequence:

var(--marquee-items) - var(--marquee-item-index)

Note that we are using a negative delay (* -1) to make the animation start in the “past,” so to speak. The only remaining variable to define is the --marquee-item-index (the current marquee item position):

.marquee--8 .marquee__item:nth-of-type(1) {
  --marquee-item-index: 1;
}
.marquee--8 .marquee__item:nth-of-type(2) {
  --marquee-item-index: 2;
}

/* etc. */

.marquee--8 .marquee__item:nth-of-type(8) {
  --marquee-item-index: 8;
}

Here’s that final demo once again:

See the Pen CSS only marquee without HTML duplication [forked] by Silvestar Bistrović.

Improvements

This solution could be better, especially when the logos are not equal widths. To adjust the gaps between inconsistently sized images, we could calculate the delay of the animation more precisely. That is possible because the animation is linear. I’ve tried to find a formula, but I think it needs more fine-tuning, as you can see:

See the Pen CSS only marquee without HTML duplication, example 4 [forked] by Silvestar Bistrović.

Another improvement we can get with a bit of fine-tuning is to prevent big gaps on wide screens. To do that, set the max-inline-size and declare margin-inline: auto on the .marquee container:

See the Pen CSS only marquee without HTML duplication, example 5 [forked] by Silvestar Bistrović.

Conclusion

What do you think? Is this something you can see yourself using on a project? Would you approach it differently? I am always happy when I land on something with a clean HTML structure and a pure CSS solution. You can see the final implementation on the Heyflow website.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Silvestar Bistrović)
<![CDATA[Colorful Blossoms And Rainy Days (April 2024 Wallpapers Edition)]]> https://smashingmagazine.com/2024/03/desktop-wallpaper-calendars-april-2024/ https://smashingmagazine.com/2024/03/desktop-wallpaper-calendars-april-2024/ Sun, 31 Mar 2024 08:00:00 GMT New month, new wallpapers! To cater for a fresh dose of colorful inspiration on a regular basis, we embarked on our monthly wallpapers journey more than 13 years ago. It’s the perfect occasion for creatives to put their artistic skills to the test, indulge in a little project just for fun, or tell the story they’ve always wanted to tell. Of course, it wasn’t any different this time around.

In this post, you’ll find beautiful, unique, and inspiring wallpapers for April 2024. Created with love by artists and designers from across the globe, they come in versions with and without a calendar and can be downloaded for free. As a little bonus goodie, we also added some April favorites from our wallpapers archives to the collection — maybe you’ll rediscover one of your almost-forgotten favorites in here, too? A big thank you to everyone who shared their designs with us this month! Happy April!

  • You can click on every image to see a larger preview,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t anyhow influenced by us but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.

Oceanic Wonders

“Celebrate National Dolphin Day on April 14th by acknowledging the captivating beauty and importance of dolphins in our oceans!” — Designed by PopArt Studio from Serbia.

Don’t Give Up

Designed by Ricardo Gimenes from Sweden.

Do Doodling

Designed by Design Studio from India.

Dung Beetle

Designed by Ricardo Gimenes from Sweden.

Live Life To The Fullest

“Live life to the fullest by daring to dream big, seizing every moment with purpose, and embracing challenges as opportunities for growth. Cultivate meaningful relationships, savoring shared experiences and creating lasting memories. Find joy in the ordinary, appreciate the beauty of the world around you, and always strive to make a positive impact, leaving a legacy of fulfillment and happiness.” — Designed by Hitesh Puri from Delhi, India.

Sow Smiles, Reap Joy

“Through this design, I aim to capture the essence of Baisakhi, a celebration of gratitude, unity, and the bountiful blessings of nature. In creating my Baisakhi artwork, I was inspired by the timeless imagery of wheat fields and dancing figures. The golden grains symbolize abundance and prosperity, while the lively dancers represent the joyous spirit of the harvest season.” — Designed by Cronix from the United States.

Warmest Eid Greetings

“Inspired by the timeless tradition of giving and sharing during Eid, my wallpaper design depicts the happiness of being together with family, the soft light of the Eid moon, and the cheerful feeling of celebration.” — Designed by Cronix from the United States.

Swing Into Spring

“Our April calendar needs not mark any special occasion — April itself is a reason to celebrate. It was a breeze creating this minimal, pastel-colored calendar design with a custom lettering font and plant pattern, for the ultimate spring feel.” — Designed by PopArt Studio from Serbia.

Spring Awakens

“Despite the threat that has befallen us all, we all look forward to the awakening of a life that spreads its wings after every dormant winter and opens its petals to greet us. Long live spring, long live life.” — Designed by LibraFire from Serbia.

The Loneliest House In The World

“March 26 was Solitude Day. To celebrate it, here is a picture of the loneliest house in the world. It is a real house; I found it on YouTube.” — Designed by Vlad Gerasimov from Georgia.

Dreaming

“The moment when you just walk and your imagination fills up your mind with thoughts.” — Designed by Gal Shir from Israel.

Coffee Morning

Designed by Ricardo Gimenes from Sweden.

Clover Field

Designed by Nathalie Ouederni from France.

Rainy Day

Designed by Xenia Latii from Berlin, Germany.

The Perpetual Circle

“Inspired by the Black Forest, which is beginning right behind our office windows, so we can watch the perpetual circle of nature, when we take a look outside.” — Designed by Nils Kunath from Germany.

Inspiring Blossom

“‘Sweet spring is your time is my time is our time for springtime is lovetime and viva sweet love,’ wrote E. E. Cummings. And we have a question for you. Is there anything more refreshing, reviving, and recharging than nature in blossom? Let it inspire us all to rise up, hold our heads high, and show the world what we are made of.” — Designed by PopArt Studio from Serbia.

A Time For Reflection

“‘We’re all equal before a wave.’ (Laird Hamilton)” — Designed by Shawna Armstrong from the United States.

Purple Rain

“This month is International Guitar Month! Time to get out your guitar and play. As a graphic designer/illustrator seeing all the variations of guitar shapes begs to be used for a fun design. Search the guitar shapes represented and see if you see one similar to yours, or see if you can identify some of the different styles that some famous guitarists have played (BTW, Prince’s guitar is in there and purple is just a cool color).” — Designed by Karen Frolo from the United States.

Sakura

“Spring is finally here with its sweet Sakura’s flowers, which remind me of my trip to Japan.” — Designed by Laurence Vagner from France.

Wildest Dreams

“We love the art direction, story, and overall cinematography of the ‘Wildest Dreams’ music video by Taylor Swift. It inspired us to create this illustration. Hope it will look good on your desktops.” — Designed by Kasra Design from Malaysia.

Good Day

“Some pretty flowers and spring time always make for a good day.” — Designed by Amalia Van Bloom from the United States.

Springing

“It’s spring already, my favorite season! You can smell it, you can see it, you can feel it in the air. Trees blossom, the grass is smiling at the sun, everything is so eager to show itself.” — Designed by Vane Kosturanov from Macedonia.

I “Love” My Dog

Designed by Ricardo Gimenes from Sweden.

April Showers

“April Showers bring hedgehogs under flowers!” — Designed by Danaé Drews from the United States.

Fresh Kicks

“Spring time means the nicer weather is rolling in, so that means the nice shoes roll out as well.” — Designed by Alex Shields from the United States.

April Fox

Designed by MasterBundles from the United States.

Flying On A Rainy Day

“April is the month of spring or autumn depending where you live on the globe! It’s also the second rainiest month of the year. I was inspired by one simple motif to illustrate rain, birds, and flowers. So whether you witness rainy days or colorful ones… enjoy April!” — Designed by Rana Kadry from Egypt.

Fairytale

“A tribute to Hans Christian Andersen. Happy Birthday!” — Designed by Roxi Nastase from Romania.

Spring Serenity

“My inspiration was the arrival of spring that transmits a sense of calmness and happiness through its beautiful colors.” — Designed by Margarida Granchinho from Portugal.

Ice Scream

Designed by Ricardo Gimenes from Sweden.

Puddle Splash

“I designed this playful and fun wallpaper inspired by nature that is present during the early spring.” — Designed by Marla Gambucci from the United States.

Take Our Daughters And Sons To Work Day

“The day revolves around parents taking their children to work to expose them to future job possibilities and the value of education. It’s been an annual event since 1992 and helps expose children to the possibilities of careers at a young age.” — Designed by Ever Increasing Circles from the United Kingdom.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[How Developers Can Strengthen Their Mental Health Amidst High-Pressure Projects]]> https://smashingmagazine.com/2024/03/developers-strengthen-mental-health/ https://smashingmagazine.com/2024/03/developers-strengthen-mental-health/ Fri, 29 Mar 2024 08:00:00 GMT I have had my fair share of projects that have given me life because of what I accomplished, as well as those that have cost me life when I reflect on the terrible stress they caused. I know I’m not unique that way; sometimes, my work makes me feel like a rock star and other times, I question whether I should be a developer at all. Some projects test you — like really test you.

In the first week of December 2023, I got a contract to rebuild an entire web app from the ground up using a new technology, set to launch alongside a “new year, new system” initiative heading into 2024.

I think you know where this is going. I built up a lot of confidence heading into the project but soon found that I had bitten off way more than I could chew. The legacy code I inherited was the epitome of “legacy” code, and its spaghetti nature needed more than one developer to suss out. The project looked doomed from the beginning, and I hadn’t even written a line of code!

I was ready to quit the job. After weeks of stress-laden sleep, I simply couldn’t stomach the work. I actually dreaded work altogether. And with that dread came doubts about my career and whether I should start looking outside the industry.

Is this starting to sound familiar?

That job wasn’t just a project that posed a personal challenge; no, it was a battle for my mental health. I was officially burned out. Thankfully, I was relieved of some pressure when, to my surprise, the client was weirdly understanding and offered to bring in an additional developer to share the load. That really helped, and it gave me what I needed to roll my sleeves back up and finish the job.

Is This Success?

The project launched, and the client was happy with it. But I still experience aftershocks, even today, where the trauma from that contract seeps back in and reminds me just how awful it was and the extent to which it made me question my entire career.

So, even though the project was ultimately a success, I wouldn’t say it was “successful.” There was a real non-monetary cost that I paid just for taking the job.

I’m sure it is the same for you. We’ve all had stressful projects that push us to the brink of what feels like self-destruction. That much is clear from the sheer number of articles and blog posts on the topic, all offering insightful personal advice for relieving stress, like exercise, sleep, and eating right.

In fact, as I reflected back on projects that predated this one particular nightmare, I realized there had been other projects I’d taken that had likely contributed to the burnout. Interestingly, I found a few common threads between them that I now use as “warning flags” going into new work.

All of our experiences are unique to us, and there is no standard recipe for managing stress and protecting your mental health. Advice in this area is always best described as “your mileage may vary” for no other reason than that it is scoped to a specific individual. True, one’s experiences can go so far as to help someone through a tough situation. I find it’s the same thing with self-help books — the best advice is usually the same advice found elsewhere, only articulated better or in a way that resonates with you.

Think of this article as more of my personal story of experiences safeguarding my mental health when finding myself in super specific work situations.

The Urgent Hotfix

Remember that project with the “comfortable” deadline? Yeah, me neither. It’s that common thing where you ask when the project needs to be completed, and you get back a sarcastic “last Tuesday.”

In this particular instance, it was a usual Monday morning. There I was, still in bed, happily rested after a fulfilling weekend. Then Slack started blasting me with notifications, all of which were in the vein of,

“Hey, users can’t make payments on the app — urgent!”

You can fault me for having Slack notifications enabled early on a Monday. But still, it killed my good mood and practically erased whatever respite I gained from the weekend. But I got up, headed over to the laptop, and began working as quickly as the day had started.

The timeline for this sort of fix is most definitely a “due last Tuesday” situation. It’s urgent and demands immediate attention at the expense of dropping everything else. There’s nothing easygoing about it. The pressure is on. As we were all trying to fix the bug, the customer support team also added to the pressure by frequently reporting the rising number of users having difficulties processing payments.

We read through this huge codebase and ran different kinds of tests, but nothing worked. I think it was around 40 minutes before the deadline that a colleague came across a Reddit post (dated six years ago or so) that had the solution in it. I tell you, that bug stood no chance. We finally got the payment system up and running. I was relieved, but at what cost?

What I Learned About Hotfixes

Urgent hotfixes are a reality for most developers I know. They sort of come with the territory. But allowing them to take away your well-earned peace of mind is all too easy. A day can go from peaceful to panicking with just one Slack notification, and it may happen at any time, even first thing on a Monday morning.

What I’d Do Differently

It’s funny how Slack is called “Slack” because it really does feel like “slacking off” when you’re not checking in. But I can tell you that my Slack notifications are now paused until more reasonable hours.

Yes, it was a very real and very urgent situation, but allowing it to pull me completely out of my personal time wasn’t the best choice. I am not the only person on the team, so someone else who is already readily available can take the call.

After all, a rested developer is a productive developer, especially when faced with an urgent situation.

The Pit Of Procrastination

I once got myself into a contract for a project that was way above my skill set. But what’s that thing developers love saying, “Fake it ’til you make it,” or something like that? It’s hard to say “no” to something, particularly if your living depends on winning project bids. Plus, I won’t lie: there’s a little pride in not wanting to admit defeat.

When I accepted the job, I convinced myself that all I needed was two full days of steady focus and dedication to get up to speed and knock things out. But guess what? I procrastinated.

It actually started out very innocently. I’d give myself a brain break and read for 30 minutes, then maybe scroll through socials, then switch to YouTube, followed by… you get the picture. By the time I realized what had happened, I was several hours off schedule, with stress starting to build and swell inside me.

Those half hours here and there took me right up to the eleventh hour.

Unfortunately, I lost the contract as I couldn’t hit my promised timeline. I take full responsibility for that, of course, but I want to be honest and illustrate the real consequences that happen when stress and fear take over. I let myself get distracted because I was essentially afraid of the project and wasn’t being honest with myself.

What I Learned About Procrastination

The “fake it ’til you make it” ethos is a farce. There are relatively “safe” situations where getting into unfamiliar territory outside your skillset is going to be just fine. However, a new client with a new project spending new money on my expertise is not one of them.

Saying “yes” to a project is a promise, not a gamble.

And I’m no longer gambling with my client’s projects.

What I’d Do Differently

Learning on the job without a solid plan is a bad idea. If a project screams “out of my league,” I’ll politely decline. In fact, I have found that referring a client to another developer with the right skill set is actually a benefit because the client appreciates the honesty and convenience of not having to find another lead. I actually get more work when I push away the work I’m least suited for.

The Unrealistic Request

This happened recently at a startup I volunteered for and is actually quite funny in hindsight. Slack chimed in with a direct message from a marketing lead on the team:

“Hi, we are gonna need to add an urgent feature for a current social media trend. Can you implement it ASAP?”

It was a great feature! I dare say I was even eager to work on it because I saw its potential for attracting new users to the platform. Just one problem: what exactly does “ASAP” mean in this instance? Yes, I know it’s “as soon as possible,” but what is the actual deadline, and what’s driving it? Are we talking one day? One week? One month? Again, startups are famous for wanting everything done two weeks ago.

But I didn’t ask those questions. I dropped everything I was doing and completed the feature in two weeks’ time. If I’m being honest, there was also an underlying fear of saying “no” to the request. I didn’t want to disappoint someone on my team.

That’s the funny part. “ASAP” was really code for “as soon as possible with your current workload.” Was that communicated well? Definitely not. Slack isn’t exactly the best medium for detailed planning. I had a lot more time than I thought, yet I let myself get swept up in the moment. Sure, I nailed the new feature, and it did indeed attract new users — but again, at what cost? I patted myself on the back for a job well done but then swiveled my chair around to realize I was facing a pile of work I had let mount up in the meantime.

And thus, the familiar weight of stress began taking its typical toll.

What I Learned About Unrealistic Requests

Everything has a priority. Someone else may have a pressing deadline, but does it supersede your own priorities? Managing priorities is more of a juggling act, but I was treating them as optional tasks that I could start and stop at will.

What I’d Do Differently

There are two things I’d do differently next time an unrealistic request comes up:

  • First, I’ll be sure to get a firm idea of when the request is actually needed and compare it to my existing priorities before agreeing to it.
  • Second, I plan on saying “no” without actually saying it. How different would the situation have been had I simply replied, “Yes, if...” instead, as in, “Yes, if I can complete this thing I’m working on first, then I’d be happy to jump on that next.” That puts the onus on the requester to do a little project management rather than allowing myself to take on the burden carte blanche.

The 48-Hour Workday

How many times have you pulled an all-nighter to get something done? If the answer is zero, that’s awesome. In my experience, though, it’s come up more times than I can count on two hands. Sometimes it’s completely my doing; I’ll get sucked into a personal side project or an interesting bug that leads to hours passing by like minutes.

I have more than a few friends and acquaintances who wear sleepless nights like merit badges as if accumulating them is somehow a desirable thing.

The most recent example for me was a project building a game. It was supposed to be pretty simple: You’re a white ball chasing red balls that are flying around the screen. That might not be the most exciting thing in the world, but it was introducing me to some new coding concepts, and I started riding a wave I didn’t want to leave. In my head, this little game could be the next Candy Crush, and there was no way I’d risk losing success by quitting at 2:00 a.m. No way.

To this day, the game is sitting dormant and collecting digital dust in a GitHub repository, unfinished and unreleased. I’m not convinced the five-day marathon was worth it. If anything, it’s like I had spent my enthusiasm for the job all at once, and when it burned me out, I needed a marathon stretch of sleep to get back to reality.

What I Learned About All-Nighters

The romanticized image of a fast-typing developer sporting a black hoodie in a dark room of servers and screens only exists in movies and is not something to emulate. There’s a reason there are 24 hours in a day instead of 48 — we need breaks and rest, if for nothing else, to be better at what we do. Mimicking a fictional stereotype is not the path to becoming a good developer, nor is it the path to sustainable living conditions.

What I’d Do Differently

I’m definitely more protective of the boundaries between me and my work. There’s a time to work, just as there’s a time for resting, personal needs, and even a time for playing.

That means I have clearly defined working hours and respect them. Naturally, there are days I need to be adaptable, but having the boundaries in place makes those days the exception as opposed to the rule.

I also identify milestones in my work that serve as natural pauses to break things up into more manageable pieces. If I find myself coding past my regular working hours, especially on consecutive days, then that’s an indication that I am taking on too much, that I am going outside of scope, or that the scope hasn’t been defined at all and needs more definition.

Bugged By A Bug

There is no escaping bugs. As developers, we’re going to make mistakes and clean them up as we go. I won’t say I enjoy bugfixes as much as developing new features, but there’s a little part of me that’s like, “Oh yeah, challenge accepted!” Bugs can often be approached as mini puzzles, and that’s not such a bad thing.

But there are those bugs that never seem to die. You know, the kind you can’t let go of? You’re absolutely sure that you’ve done everything correctly, and yet, the bug persists. It nearly gets to the point where you might be tempted to blame the bug on the browser or whatever dependency you’re working with, but you know it’s not. It sticks with you at night as you go to bed.

Then comes the epiphany: Oh crap, it’s a missing X. And X is usually a missing semicolon or anything else that’s the equivalent of unplugging the thing and plugging it back in only to find things are working perfectly.

I have lots of stories like this. This one time, however, takes the cake. I was working on this mobile app with React Native and Expo. Everything was going smoothly, and I was in the zone! Then, a rendering error cropped up for no clear reason. My code compiled, and all the tests passed, but the app refused to render on my mobile device.

So, like any logical developer, I CTRL + Z’d my way back in time until I reached a point where I was sure that the app rendered as expected. I still got the same rendering error.

That was when I knew this bug was out for my blood. I tried every trick I knew in the book to squash that thing, but it simply would not go away. I was removing and installing packages like a madman, updating dependencies, restarting VS Code, poring over documentation, and rebooting my laptop. Still nothing.

For context: Developers typically use Expo on their devices to render the apps in real-time when working with React Native and Expo. I was not, and therein lies the problem. My phone had decided to ditch the same Wi-Fi network that my laptop was connected to.

All I had to do was reconnect my phone to the network. Problem solved. But agony in the process.

What I Learned About Bugfixes

Not every code bug has a code solution. Even though I had produced perfectly valid scripts, I doubted my work and tackled the issue with what’s natural to me: code.

If I had stepped back from my work for even a moment, then I probably would have seen the issue and saved myself many hours and headaches. I let my frustration take over to the extent that the bug was no longer a mini puzzle but the bane of my existence. I really needed to read my temperature level and know when to take a break.

Bugs sometimes make me doubt my credibility as a developer, especially when the solution is both simple and right under my nose the entire time — like network connectivity.

What I’d Do Differently

There’s an old Yiddish saying: To a worm in horseradish, the world is horseradish. You may recognize it as the leading quote in Malcolm Gladwell’s What the Dog Saw and Other Adventures. It’s closely related to other common sayings along the lines of, “To a hammer, everything is a nail.”

In addition to trying to look at bugs from a non-horseradish perspective, I now know to watch my frustration level when things start feeling helpless. Take breaks. Take a walk. Eat lunch. Anything to break the cycle of rumination. It’s often in that moment of clarity that the puzzle finally starts to come together.

The Meeting-Working Imbalance

I don’t like meetings, and I’m sure many developers would agree with me on that. They’re somewhat of a necessary evil, right? There’s value, for example, in the weekly standups for checking in on the team’s progress and staying on the same page about what’s coming up in the following week of planning.

If only that was the one single meeting I had to attend on a given day.

Let me describe one particular day that I feel is emblematic of what I think is a common conflict between time spent in meetings and time spent working. I got to my workspace and was ready for the usual half-hour weekly team check-in. It went a little over, which was fine, but it did mean I had to rush to the next meeting instead of having a little buffer between the two. That meeting was a classic one, the type where everyone wants a developer in the room in case something technical comes up but never does, leaving me bored and dividing my attention with my actual work.

We had five meetings that day. In my book, that’s a full day completely wasted because I was unable to get around to writing any code at all, save for a few lines I could squeeze in here and there. That’s no way to work, but it is unfortunately a common pattern.

What I Learned About Meetings

Meetings have to happen. I get that. But I’ve learned that not every meeting is one that I personally need to attend. In many cases, I can get the gist of what happened in a meeting by watching the recording or reading the project manager’s notes. I now know that meetings can “happen” in lots of ways, and what comes from them can still be learned asynchronously in many instances.

What I’d Do Differently

From here on out, I am asking (politely, of course) whether my attendance is mandatory when certain meetings come up. I also ask if I can either prepare something for the group in advance or be brought up to speed after the meeting has happened.

Conclusion

That’s it! These are a handful of situations I have found myself in over the past couple of years. It’s funny how seemingly small events can coalesce and reveal patterns of behavior. There’s a common thread of stubbornness running through them that has opened my eyes to the way I work and how I manage my mental health.

I’m sure it is the same for you. What times can you remember when stress, anxiety, and frustration consumed you? Are you able to write them down? Do you see a pattern forming? I believe doing this sort of mental inventory is valuable because you start to see specific things that trigger your feelings, and with that, it’s possible to recognize and avoid them in the future.

]]>
hello@smashingmagazine.com (Victor Ayomipo)
<![CDATA[The Future Of User Research: Expert Insights And Key Trends]]> https://smashingmagazine.com/2024/03/the-future-of-user-research/ https://smashingmagazine.com/2024/03/the-future-of-user-research/ Wed, 27 Mar 2024 13:00:00 GMT This article is a sponsored by Maze

How do product teams conduct user research today? How do they leverage user insights to make confident decisions and drive business growth? And what role does AI play? To learn more about the current state of user research and uncover the trends that will shape the user research landscape in 2024 and beyond, Maze surveyed over 1,200 product professionals between December 2023 and January 2024.

The Future of User Research Report summarizes the data into three key trends that provide valuable insights into an industry undergoing significant change. Let’s take a closer look at the main findings from the report.

Trend 1: The Demand For User Research Is Growing

62% of respondents who took the Future of User Research survey said the demand for user research has increased in the past 12 months. Industry trends like continuous product discovery and research democratization could be contributing to this growth, along with recent layoffs and reorganizations in the tech industry.

Emma Craig, Head of UX Research at Miro, sees one reason for this increase in the uncertain times we’re living in. Under pressure to beat the competition, she sensed a “shift towards more risk-averse attitudes, where organizations feel they need to ‘get it right’ the first time.” By conducting user research, organizations can mitigate risk and clarify the strategy of their business or product.

Research Is About Learning

As the Future of User Research Report found, organizations are leveraging research to make decisions across the entire product development lifecycle. The main consumers of research are design (86%) and product (83%) teams, but marketing, executive teams, engineering, data, customer support, and sales also rely on the results of user research to inform their decision-making.

As Roberta Dombrowski, Research Partner at Maze, points out:

“At its core, research is about learning. We learn to ensure that we’re building products and services that meet the needs of our customers. The more we invest in growing our research practices and team, the higher our likelihood of meeting these needs.”

Benefits And Challenges Of Conducting User Research

As it turns out, the effort of conducting user research on a regular basis pays off. 85% of respondents said that user research improved their product’s usability, 58% saw an increase in customer satisfaction, and 44% in customer engagement.

Connecting research insights to business outcomes remains a key challenge, though. While awareness of the need to measure research impact is growing (73% of respondents track the impact of their research), 41% reported that they find it challenging to translate research insights into measurable business outcomes. Other significant challenges teams face are time and bandwidth constraints (62%) and recruiting the right participants (60%).

Growing A Research Mindset

With the demand for user research growing, product teams need to find ways to expand their research initiatives. 75% of the respondents in the Maze survey are planning to scale research in the next year by increasing the number of research studies, leveraging AI tools, and providing training to promote research democratization.

Janelle Ward, Founder of Janelle Ward Insights, sees great potential in growing research practices, as an organization will grow a research mindset in tandem. She shares:

“Not only will external benefits like competitive advantage come into play, but employees inside the organization will also better understand how and why important business decisions are made, resulting in more transparency from leadership and a happier and more thriving work culture for everyone.”

Trend 2: Research Democratization Empowers Stronger Decision-Making

Research democratization involves empowering different teams to run research and get access to the insights they need to make confident decisions. The Future of User Research Report shows that in addition to researchers, product designers (61%), product managers (38%), and marketers (17%) conduct user research at their companies to inform their decision-making.

Teams with a democratized research culture reported a greater impact on decision-making. They are 2× more likely to report that user research influences strategic decisions, 1.8× more likely to state that it impacts product decisions, and 1.5× more likely to express that it inspires new product opportunities.

The User Researcher’s New Role

Now, if more people are conducting user research in an organization, does this mark the end of the user researcher role? Not at all. Scaling research through democratization doesn’t mean anyone can do any type of research. You’ll need the proper checks and balances to allow everyone to participate in research responsibly and effectively. The role is shifting from a purely technical role to an educational one, where user researchers become responsible for guiding the organization in its learning and curiosity.

To guarantee data quality and accuracy, user researchers can train partners on research methods and best practices and give them hands-on experience before they start their own research projects. This can involve having them shadow a researcher during a project, holding mock interviews, or leading collaborative analysis workshops.

Democratizing user research also means that UX researchers can open up time to focus on more complex research initiatives. While tactical research, such as usability testing, can be delegated to designers and product managers, UX researchers can conduct foundational studies to inform the product and business strategy.

User Research Tools And Techniques

It’s also interesting to see which tools and techniques product teams use to gather user insights. Maze (46%), Hotjar (26%), and UserTesting (24%) are the most widely used user research tools. When it comes to user research methods, product teams mostly turn to user interviews (89%), usability testing (85%), surveys (82%), and concept testing (56%).

According to Morgan Mullen, Lead UX Researcher at User Interviews, a factor to consider is the type of projects teams run. Most teams rarely change their information architecture, the kind of project that calls for tree testing or card sorting. But they are likely launching new features often, making usability testing a more popular research method.

Trend 3: New Technology Allows Product Teams To Significantly Scale Research

AI is reshaping how we work in countless ways, and user research is no exception. According to the Future of User Research Report, 44% of product teams are already using AI tools to run research and an additional 41% say they would like to adopt AI tools in the future.

ChatGPT is the most widely used AI tool for conducting research (82%), followed by Miro AI (20%), Notion AI (18%), and Gemini (15%). The most commonly used research tools with AI features are Maze AI (15%), UserTesting AI (9%), and Hotjar AI (5%).

The Strengths Of AI

The tactical aspect of research is where AI truly shines. More than 60% of respondents use AI to analyze user research data, 54% for transcription, 48% for generating research questions, and 45% for synthesis and reporting. By outsourcing these tasks to artificial intelligence, respondents reported that their team efficiency improved (56%) and turnaround time for research projects decreased (50%) — freeing up more time to focus on the human and strategic side of research (35%).

The Irreplaceable Value Of Research

While AI is great at tackling time-consuming, tactical tasks, it is not a replacement for a skilled researcher. As Kate Pazoles, Head of Flex User Research at Twilio, points out, we can think of AI as an assistant. The value lies in connecting the dots and uncovering insights with a level of nuance that only UX researchers possess.

Jonathan Widawski, co-founder and CEO at Maze, sums up the growing role that AI plays in user research as follows:

“AI will be able to support the entire research process, from data collection to analysis. With automation powering most of the tactical aspects, a company’s ability to build products fast is no longer a differentiating factor. The key now lies in a company’s ability to build the right product — and research is the power behind all of this.”

Looking Ahead

With teams adopting a democratized user research culture and AI tools on the rise, the user researcher’s role is shifting towards that of a strategic partner for the organization.

Instead of gatekeeping their knowledge, user researchers can become facilitators and educate different teams on how to engage with customers and use those insights to make better decisions. By doing so, they help ensure the quality and accuracy of research conducted by non-researchers while opening up time to focus on more complex, strategic research. Adopting a research mindset also helps teams value user research more and fosters a happier, more thriving work culture. A win-win for the organization, its employees, and customers.

If you’d like more data and insights, read the full Future of User Research Report by Maze here.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[Setting And Persisting Color Scheme Preferences With CSS And A “Touch” Of JavaScript]]> https://smashingmagazine.com/2024/03/setting-persisting-color-scheme-preferences-css-javascript/ https://smashingmagazine.com/2024/03/setting-persisting-color-scheme-preferences-css-javascript/ Mon, 25 Mar 2024 12:00:00 GMT Many modern websites give users the power to set a site-specific color scheme preference. A basic implementation is straightforward with JavaScript: listen for when a user changes a checkbox or clicks a button, toggle a class (or attribute) on the <body> element in response, and write the styles for that class to override design with a different color scheme.
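As a rough sketch of that traditional approach (purely illustrative; the "dark" class name and the commented-out #dark-mode-toggle id are assumptions, not from any particular site), the toggling logic can be kept separate so the DOM wiring stays trivial:

```javascript
// Sketch of the classic JavaScript-driven approach described above.
// The "dark" class is what the override styles would target.
function applyColorScheme(bodyClassList, isDark) {
  // Add or remove the class depending on the control's state.
  bodyClassList.toggle("dark", isDark);
}

// Browser wiring (commented out; requires a DOM):
// document
//   .querySelector("#dark-mode-toggle")
//   .addEventListener("change", (event) =>
//     applyColorScheme(document.body.classList, event.target.checked)
//   );
```

The rest of the article replaces exactly this kind of script with CSS.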

CSS’s new :has() pseudo-class, supported by major browsers since December 2023, opens many doors for front-end developers. I’m especially excited about leveraging it to modify UI in response to user interaction without JavaScript. Where previously we have used JavaScript to toggle classes or attributes (or to set styles directly), we can now pair :has() selectors with HTML’s native interactive elements.

Supporting a color scheme preference, like “Dark Mode,” is a great use case. We can use a <select> element anywhere that toggles color schemes based on the selected <option> — no JavaScript needed, save for a sprinkle of it to persist the user’s choice, which we’ll get to later on.

Respecting System Preferences

First, we’ll support a user’s system-wide color scheme preferences by adopting a “Light Mode”-first approach. In other words, we start with a light color scheme by default and swap it out for a dark color scheme for users who prefer it.

The prefers-color-scheme media feature detects the user’s system preference. Wrap “dark” styles in a prefers-color-scheme: dark media query.

selector {
  /* light styles */

  @media (prefers-color-scheme: dark) {
    /* dark styles */
  }
}

Next, set the color-scheme property to match the preferred color scheme. Setting color-scheme: dark switches the browser into its built-in dark mode, which includes a black default background, white default text, “dark” styles for scrollbars, and other elements that are difficult to target with CSS, and more. I’m using CSS variables to hint that the value is dynamic — and because I like the browser developer tools experience — but plain color-scheme: light and color-scheme: dark would work fine.

:root {
  /* light styles here */
  color-scheme: var(--color-scheme, light);

  /* system preference is "dark" */
  @media (prefers-color-scheme: dark) {
    --color-scheme: dark;
    /* any additional dark styles here */
  }
}
Giving Users Control

Now, to support overriding the system preference, let users choose between light (default) and dark color schemes at the page level.

HTML has native elements for handling user interactions. Using one of those controls, rather than, say, a <div> nest, improves the chances that assistive tech users will have a good experience. I’ll use a <select> menu with options for “system,” “light,” and “dark.” A group of <input type="radio"> would work, too, if you wanted the options right on the surface instead of a dropdown menu.

<select id="color-scheme">
  <option value="system" selected>System</option>
  <option value="light">Light</option>
  <option value="dark">Dark</option>
</select>

Before CSS gained :has(), responding to the user’s selected <option> required JavaScript, for example, setting an event listener on the <select> to toggle a class or attribute on <html> or <body>.

But now that we have :has(), we can do this with CSS alone! You’ll save spending any of your performance budget on a dark mode script, plus the control will work even for users who have disabled JavaScript. And any “no-JS” folks on the project will be satisfied.

What we need is a selector that applies to the page when it :has() a select menu with a particular [value]:checked. Let’s translate that into CSS:

:root:has(select option[value="dark"]:checked)

We’re defaulting to a light color scheme, so it’s enough to account for two possible dark color scheme scenarios:

  1. The page-level color preference is “system,” and the system-level preference is “dark.”
  2. The page-level color preference is “dark”.
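Those two scenarios boil down to a single rule: follow the page-level choice unless it is “system,” in which case follow the system. Here is a hypothetical helper that expresses that rule, for illustration only (the article’s actual implementation lives entirely in CSS):

```javascript
// Resolve the effective color scheme from the page-level selection
// ("system" | "light" | "dark") and the system-level preference
// ("light" | "dark"). Illustrative only; the article does this in CSS.
function resolveColorScheme(pageChoice, systemPreference) {
  return pageChoice === "system" ? systemPreference : pageChoice;
}

console.log(resolveColorScheme("system", "dark")); // "dark"
console.log(resolveColorScheme("light", "dark")); // "light"
```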

The first one is a page-preference-aware iteration of our prefers-color-scheme: dark case. A “dark” system-level preference is no longer enough to warrant dark styles; we need a “dark” system-level preference and a “follow the system-level preference” at the page-level preference. We’ll wrap the prefers-color-scheme media query dark scheme styles with the :has() selector we just wrote:

:root {
  /* light styles here */
  color-scheme: var(--color-scheme, light);

  /* page preference is "system", and system preference is "dark" */
  @media (prefers-color-scheme: dark) {
    &:has(#color-scheme option[value="system"]:checked) {
      --color-scheme: dark;
      /* any additional dark styles, again */
    }
  }
}

Notice that I’m using CSS Nesting in that last snippet. Baseline 2023 has it pegged as “Newly available across major browsers” which means support is good, but at the time of writing, support on Android browsers not included in Baseline’s core browser set is limited. You can get the same result without nesting.

:root {
  /* light styles */
  color-scheme: var(--color-scheme, light);

  /* page preference is "dark" */
  &:has(#color-scheme option[value="dark"]:checked) {
    --color-scheme: dark;
    /* any additional dark styles */
  }
}

For the second dark mode scenario, we’ll use nearly the exact same :has() selector as we did for the first scenario, this time checking whether the “dark” option — rather than the “system” option — is selected:

:root {
  /* light styles */
  color-scheme: var(--color-scheme, light);

  /* page preference is "dark" */
  &:has(#color-scheme option[value="dark"]:checked) {
    --color-scheme: dark;
    /* any additional dark styles */
  }

  /* page preference is "system", and system preference is "dark" */
  @media (prefers-color-scheme: dark) {
    &:has(#color-scheme option[value="system"]:checked) {
      --color-scheme: dark;
      /* any additional dark styles, again */
    }
  }
}

Now the page’s styles respond to both changes in users’ system settings and user interaction with the page’s color preference UI — all with CSS!

But the colors change instantly. Let’s smooth the transition.

Respecting Motion Preferences

Instantaneous style changes can feel inelegant in some cases, and this is one of them. So, let’s apply a CSS transition on the :root to “ease” the switch between color schemes. (Transition styles at the :root will cascade down to the rest of the page, which may necessitate adding transition: none or other transition overrides.)

Note that the CSS color-scheme property does not support transitions.

:root {
  transition-duration: 200ms;
  transition-property: /* properties changed by your light/dark styles */;
}

Not all users will consider the addition of a transition a welcome improvement. Querying the prefers-reduced-motion media feature allows us to account for a user’s motion preferences. If the value is set to reduce, then we zero out the transition-duration to eliminate unwanted motion.

:root {
  transition-duration: 200ms;
  transition-property: /* properties changed by your light/dark styles */;

  @media screen and (prefers-reduced-motion: reduce) {
    transition-duration: 0s;
  }
}

Transitions can also produce poor user experiences on devices that render changes slowly, for example, ones with e-ink screens. We can extend our “no motion condition” media query to account for that with the update media feature. If its value is slow, then we zero out the transition-duration.

:root {
  transition-duration: 200ms;
  transition-property: /* properties changed by your light/dark styles */;

  @media screen and (prefers-reduced-motion: reduce), (update: slow) {
    transition-duration: 0s;
  }
}

Let’s try out what we have so far in the following demo. Notice that, to work around color-scheme’s lack of transition support, I’ve explicitly styled the properties that should transition during theme changes.

See the Pen CSS-only theme switcher (requires :has()) [forked] by Henry.

Not bad! But what happens if the user refreshes the page or navigates to another page? The reload effectively wipes out the user’s form selection, forcing the user to re-make the selection. That may be acceptable in some contexts, but it’s likely to go against user expectations. Let’s bring in JavaScript for a touch of progressive enhancement in the form of…

Persistence

Here’s a vanilla JavaScript implementation. It’s a naive starting point — the functions and variables aren’t encapsulated but are instead properties on window. You’ll want to adapt this in a way that fits your site’s conventions, framework, library, and so on.

When the user changes the color scheme from the <select> menu, we’ll store the selected <option> value in a new localStorage item called "preferredColorScheme". On subsequent page loads, we’ll check localStorage for the "preferredColorScheme" item. If it exists, and if its value corresponds to one of the form control options, we restore the user’s preference by programmatically updating the menu selection.

/*
 * If a color scheme preference was previously stored,
 * select the corresponding option in the color scheme preference UI
 * unless it is already selected.
 */
function restoreColorSchemePreference() {
  const colorScheme = localStorage.getItem(colorSchemeStorageItemName);

  if (!colorScheme) {
    // There is no stored preference to restore
    return;
  }

  const option = colorSchemeSelectorEl.querySelector(`[value=${colorScheme}]`);

  if (!option) {
    // The stored preference has no corresponding option in the UI.
    localStorage.removeItem(colorSchemeStorageItemName);
    return;
  }

  if (option.selected) {
    // The stored preference’s corresponding menu option is already selected
    return;
  }

  option.selected = true;
}

/*
 * Store an event target’s value in localStorage
 * under colorSchemeStorageItemName
 */
function storeColorSchemePreference({ target }) {
  const colorScheme = target.querySelector(":checked").value;
  localStorage.setItem(colorSchemeStorageItemName, colorScheme);
}

// The name under which the user’s color scheme preference will be stored.
const colorSchemeStorageItemName = "preferredColorScheme";

// The color scheme preference front-end UI.
const colorSchemeSelectorEl = document.querySelector("#color-scheme");

if (colorSchemeSelectorEl) {
  restoreColorSchemePreference();

  // When the user changes their color scheme preference via the UI,
  // store the new preference.
  colorSchemeSelectorEl.addEventListener("input", storeColorSchemePreference);
}

Let’s try that out. Open this demo (perhaps in a new window), use the menu to change the color scheme, and then refresh the page to see your preference persist:

See the Pen CSS-only theme switcher (requires :has()) with JS persistence [forked] by Henry.

If your system color scheme preference is “light” and you set the demo’s color scheme to “dark,” you may get the light mode styles for a moment immediately after reloading the page before the dark mode styles kick in. That’s because CodePen loads its own JavaScript before the demo’s scripts. That is out of my control, but you can take care to improve this persistence on your projects.

Persistence Performance Considerations

Where things can get tricky is restoring the user’s preference immediately after the page loads. If the color scheme preference in localStorage is different from the user’s system-level color scheme preference, it’s possible the user will see the system preference color scheme before the page-level preference is restored. (Users who have selected the “System” option will never get that flash; neither will those whose system settings match their selected option in the form control.)

If your implementation shows a “flash of inaccurate color theme,” look at where the restoration script runs. Generally speaking, the earlier the script appears on the page, the lower the risk. The “best option” for you will depend on your specific stack, of course.
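One way to shrink that window is to run the restoration logic as early as possible, for example, from a small inline script in the <head>. Below is a rough sketch with the storage and select element passed in as parameters, so the idea reads without committing to a page structure; the "preferredColorScheme" key and #color-scheme id mirror the demo above, while the function itself is an assumption:

```javascript
// Hypothetical early-restore helper. In a real page, storage would be
// localStorage and selectEl the #color-scheme <select>; a native <select>
// deselects the other options automatically when one becomes selected.
function restoreEarly(storage, selectEl) {
  const stored = storage.getItem("preferredColorScheme");
  if (!stored) return;
  const option = selectEl.querySelector(`option[value="${stored}"]`);
  if (option) option.selected = true;
}

// Possible wiring, as early in the document as the <select> exists:
// restoreEarly(localStorage, document.querySelector("#color-scheme"));
```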

What About Browsers That Don’t Support :has()?

All major browsers support :has() today. Lean into modern platforms if you can. But if you do need to consider legacy browsers, like Internet Explorer, there are two directions you can go: either hide or remove the color scheme picker for those browsers or make heavier use of JavaScript.

If you consider color scheme support itself a progressive enhancement, you can entirely hide the selection UI in browsers that don’t support :has():

@supports not selector(:has(body)) {
  @media (prefers-color-scheme: dark) {
    :root {
      /* dark styles here */
    }
  }

  #color-scheme {
    display: none;
  }
}

Otherwise, you’ll need to rely on a JavaScript solution not only for persistence but for the core functionality. Go back to that traditional event listener toggling a class or attribute.

The CSS-Tricks “Complete Guide to Dark Mode” details several alternative approaches that you might consider as well when working on the legacy side of things.

]]>
hello@smashingmagazine.com (Henry Bley-Vroman)
<![CDATA[Crafting Experiences: Uniting Rikyu’s Wisdom With Brand Experience Principles]]> https://smashingmagazine.com/2024/03/uniting-rikyu-wisdom-brand-experience-principles/ https://smashingmagazine.com/2024/03/uniting-rikyu-wisdom-brand-experience-principles/ Thu, 21 Mar 2024 15:00:00 GMT In today’s dynamic and highly competitive market, the concept of brand experience is a key aspect of customer engagement: designers, take note.

Brand experience refers to all customer interactions and engagements with a brand, encompassing various brand channels, products, services, and encounters from the company website to unpacking its product. It involves following the user each time she comes into contact with the brand and ensuring that her experience is consistent and pleasant.

Beyond merely designing products or services, the designers or design team (along with the marketing department) must strive to create memorable, emotional, and immersive interactions with their customers. A compelling brand experience attracts and retains customers while reinforcing the brand promise.

Achieving this goal can be daunting but not impossible as long as designers follow specific principles. Recently, I attended a tea ceremony in the Japanese city of Kyoto, where I was introduced to Rikyu’s timeless wisdom. With fascination, I saw that such wisdom and insight could be applied to the principles of a compelling brand experience in the following ways.

The Japanese Tea Ceremony, According to Tea Master Rikyu

The seven principles of Rikyu were developed by Sen no Rikyu, a revered tea master of the 16th century. Each principle encapsulates the essence of the Japanese tea ceremony, emphasizing not only the preparation of tea but also the creation of a harmonious, meaningful experience.

During my own captivating tea ceremony experience, I gained valuable insights and a fresh perspective on how designers can help create meaningful connections between brands and their audiences, much as the tea ceremony has done for generations.

Rule One: Making a Satisfying Bowl of Tea

The first principle of Rikyu goes right to the heart of the tea ceremony: preparing a satisfying bowl of tea.

This deceptively simple principle reminds designers that everything we design for a brand should be able to provide a memorable experience for the final user. We should aim to go beyond simple brand and customer transactions and instead focus on crafting experiences through products and services.

Examples:

  • Airbnb,
  • Duolingo.

Both of them facilitate extraordinary experiences beyond the basic user interaction of “rent a house for my trip” and “learn a foreign language.”

Airbnb: Redefining Travel Through Experience

Compared to competitors like Booking.com, Airbnb has completely redefined the experience of travelling, adding a strong storytelling aspect.

From the beginning, the brand has offered a way for travelers to truly immerse themselves in the culture and lifestyle of their destinations.

Today, Airbnb’s website shows the brand offering the possibility of “living” in an extraordinary place, from cozy apartments to extravagant castles. We can see that their brand isn’t just about finding the right accommodation but also about creating enduring memories and stories.

Their services have been significantly updated in recent years, offering customers great flexibility to book in total safety from qualified hosts (called Superhosts) with homes that have been reviewed and reflect Airbnb quality standards.

Takeaway: Aim to create experiences that stay with people long after they have interacted with your brand.

Duolingo: Language-Learning as a Playful Adventure

Language learning is often considered a daunting task, one that pushes us out of our comfort zones. But Duolingo, with its playful and gamified approach, is changing that perception.

Their app has transformed language learning into a delightful adventure that anyone can join, even for just five minutes a day.

By creating characters that team up with Duo (the owl mascot), Duolingo injects a sense of companionship and relatability into language learning, making it feel like taking a journey alongside a helpful friend.

Takeaway: Break down complex tasks into enjoyable, bite-sized experiences that improve the long-term experience.

Rule Two: Efficiently Laying the Charcoal for Boiling Water

As I took my place in the tea room, just opposite the tea master, he explained that charcoal plays an extremely important role in the ceremony. It must be precisely placed to encourage airflow, prevent the fire from extinguishing prematurely, and prepare tea at the perfect temperature.

For designers, this translates into creating a comprehensive set of guidelines and rules that dictate how every brand element should look, feel, and behave.

Much like the precise arrangement of charcoal, a well-designed brand system is the foundation of consistent and efficient brand representation that ensures harmony and coherence across every touchpoint.

This may seem obvious, but it is only in the last decade that technology companies have started creating elaborate and complete brand guidelines.

Examples:

  • IBM,
  • Atlassian.

IBM: Consistency Breeds Loyalty and Recognisability

When we think about the connection between brand and technology, it’s natural to think immediately of Apple and Steve Jobs. So it may surprise you that IBM was, in fact, one of the first tech companies to hire a professional graphic designer.

Acclaimed graphic designer Paul Rand designed the iconic IBM logo in 1956. The collaboration between Paul Rand and the company went on for many years, becoming a benchmark for the integration of design principles into the corporate identity of a tech company.

Even today, IBM’s design system Carbon is a testament to the power of simplicity and consistency. Focusing on clarity and functionality, IBM’s brand elements work seamlessly across a diverse range of products and services, including events and workplaces. The Carbon design system is also open source, meaning anyone can contribute to improving it.

Takeaway: A consistent and well-designed brand identity allows for organic growth and expansion without diluting the brand, reinforcing brand loyalty and recognition.

Atlassian: Guiding Future Decisions

Atlassian is a software company with a diverse product portfolio. Their design system promotes scalability and flexibility, while their brand elements are designed to adapt harmoniously across various Atlassian applications.

This adaptability ensures a unified brand experience while accommodating the unique characteristics of each product. It serves as a compass, helping designers navigate the vast landscape of possibilities and ensuring that each design decision made for each Atlassian product aligns with the brand’s essence.

Takeaway: A strong design foundation serves as an invaluable guide as brands evolve and expand their offering through more different products and services.

Rule Three: Providing Warmth in Winter and Coolness in Summer

In the art of the Japanese tea ceremony, the provision of warmth in winter and coolness in summer is a delicate balance, attuned to the emotional and physical states of the participants. This is well-reflected in the tea room’s decoration, and the tea served, depending on the hour and the season, in a bowl chosen by the tea master.

When I attended the tea ceremony, the room was decorated to reflect the spring season. The sweet was also inspired by the blooming cherry blossoms, which were pink and light green. The door to the garden was left open so that we could appreciate the scent of fresh blossoms in the gentle spring breeze.

In the design world, this rule translates into the profound understanding and adaptation of experiences to align with customers’ ever-changing needs and emotional states throughout their journey.

Understanding the natural flow of emotions during the user journey allows brands to create responsive experiences that feel personal.

Examples:

  • Nike,
  • Netflix.

Nike: Versatility in Style and Experience

Nike, perhaps better than any other sportswear brand, exemplifies mastery in tailoring brand experiences.

The brand recognizes that customers engage with their products across diverse activities.

For this reason, Nike offers a wide range of products, sometimes presented with mini-websites and beautiful campaigns, each with its own distinct style and purpose.

Takeaway: By catering to their users’ varied tastes and needs, brands can tailor experiences to individual preferences and emotions, fostering a deeper connection and resonance.

Netflix: Personalised Home Entertainment

Netflix has deftly pioneered the use of advanced algorithms and artificial intelligence to tailor its content recommendations. These are based not only on geographic location but also on individual user preferences.

The platform dynamically adjusts preview images and trailers, aiming to match each user’s unique taste.

Their latest update includes Dynamic Sizzle Reel, short personalized clips of upcoming shows that offer each member a unique and effective experience.

It is worth noting, however, that while Netflix puts effort into yielding greater engagement and enjoyment for their members, the subjective nature of taste can sometimes lead to surprises, where a preview may align perfectly with an individual user’s taste, yet the show itself varies in style.
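The taste-matching idea described above can be illustrated with a minimal sketch. This is a toy stand-in, not Netflix’s actual system: the tags, scoring function, and data are all hypothetical, and real systems use learned models rather than simple tag overlap.

```python
# Toy sketch of taste-based preview selection (hypothetical data and names).

def score(tags_a, tags_b):
    """Overlap between a user's taste tags and a candidate artwork's tags."""
    return len(set(tags_a) & set(tags_b))

def pick_preview(user_tags, artworks):
    """Pick the preview image whose tags best match the user's taste."""
    return max(artworks, key=lambda art: score(user_tags, art["tags"]))

artworks = [
    {"image": "romance_poster.jpg", "tags": ["romance", "drama"]},
    {"image": "action_poster.jpg", "tags": ["action", "thriller"]},
]

viewer = ["thriller", "crime", "action"]
print(pick_preview(viewer, artworks)["image"])  # action_poster.jpg
```

The same show gets a different preview for a romance fan than for a thriller fan, which is exactly the interplay of familiarity and novelty the takeaway below describes.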

Takeaway: When customizing experiences, designers should create an interplay between familiarity and novelty, tailoring content to individual tastes while respecting the user’s need for both comfort and discovery.

Rule 4: Arranging Flowers as Though They Were in the Field

As I stepped into the tea room, there was a sense of harmony and tranquillity infused by nature forming part of the interior environment.

The flowers were meticulously arranged in a pot as though plucked directly from the field at that very moment. According to Rikyu’s principles, their composition should be an ode to nature’s simplicity and authenticity.

For designers, this rule echoes the importance of using aesthetics to create a visually captivating brand experience that authentically reflects the brand’s values and mission.

The aesthetic choices in design can convey a brand’s essence, creating a harmonious and truthful representation of the brand and its services.

It is important to remember, however, that a visually appealing brand experience is not just about aesthetics alone, but using them to create an emotional and truthful connection with the audience.

Examples:

  • Kerrygold,
  • WWF.

Kerrygold: Forging Memorable Narratives

The Kerrygold “Magic Pantry” website is a testament to the art of visual storytelling, following the brand’s mission to spread authentic Irish recipes and stories from Ireland and its farms.

Through a captivating storytelling game, users explore a recipe or storybook, pick a favorite dish based on their meal, and choose their assistant.

In a perfect story fashion, with a good amount of personalization, users then learn how to cook their chosen recipes using Kerrygold products.

This immersive experience showcases the excellence of Kerrygold’s products and conveys the brand’s commitment to quality and authenticity, while the storybook evokes the idea of passing family traditions across the world (so common in the past!).

Takeaway: Through visuals, designers need to be authentic, reflecting the truth about the brand. This truthfulness enhances the credibility of the brand’s narrative and establishes deeper user connections.

WWF: Enhancing Memorability Through Beauty and Truth

WWF employs visual storytelling to raise awareness about environmental issues and species in danger of extinction. Their campaign websites always present a beautiful and immersive visual journey that authentically communicates the urgency of their mission.

Grounded in the universal act of eating, the campaign websites prompt users to reflect on the profound impact of their habits on the environment.

Both websites adopt a quiz-like approach, ingeniously guiding users to reflect on and reassess their food consumption patterns and fostering a journey toward mindful eating that respects both species and the environment.

Beyond individual insights, the interactive nature of these platforms encourages users to extend their newfound knowledge to their friends, amplifying awareness of crucial topics such as food consumption, CO2 emissions, and sustainable alternatives.

Takeaway: By infusing elements of discovery and self-reflection, designers can help brands promote their values and missions while empowering their users to become ambassadors for change.

Rule 5: Being Ready Ahead of Time

In the Japanese tea ceremony, Rule 5 of Rikyu’s principles places emphasis on the seamless art of preparation, ensuring that everything is ready for the guests.

For their part, guests are expected to arrive on time for their appointment and be mindfully prepared to attend the ceremony.

Designers should note that this principle underscores the significance of foresight, careful planning, and meticulous attention — both to detail and the user’s time.

For brands, being ready ahead of time is paramount for delivering a seamless and exceptional customer experience.

By proactively addressing customer needs and meticulously planning every touchpoint, designers can create a seamless and memorable brand experience that fosters customer satisfaction and loyalty by respecting the value of time.

Examples:

  • IKEA,
  • Amazon.

IKEA: Anticipating Customer Expectations

IKEA, the global furniture and home goods giant, is a brand that, since the beginning, has used its vast warehouse store layout to anticipate and plan customers’ needs — even the unconscious ones. In fact, you could well be among those shoppers who plan to buy just a few items but leave the store with a trolley full of things they never knew they needed!

When customers enter an IKEA store, they encounter a meticulously planned and organized environment designed as a circular one-way system.

This specific layout of the IKEA store creates a sense of discovery. It encourages shoppers to keep walking through the different departments effortlessly, anticipating or projecting needs that they may have been unaware of before they entered.

Takeaway: Brands should harness the creative ability to tap into customers’ subconscious minds through environment and product display in a way that exceeds their expectations.

Amazon: A Ready-to-go Shopping Experience

Amazon understands that their customers’ time is valuable, creating seamless online and offline experiences that streamline the shopping experience. Their unique systems strive to avoid inconveniences and provide a quick, ready-to-go shopping experience.

For example, their patented one-click ordering system simplifies the checkout process, reducing friction by saving users the trouble of manually selecting or entering settings (like address and payment methods) that are used repeatedly.

Meanwhile, the brick-and-mortar Amazon Go stores exemplify innovation, offering a shopping experience where customers can grab items and go without waiting in line.

These stores work by using the same types of technologies found in self-driving cars, such as computer vision, sensor fusion, and deep learning.

This technology can detect when products are taken or returned to the shelves, tracking them in the customer’s virtual cart. When customers leave the store with their goods, their Amazon account is charged, and a receipt is sent.
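The take/return/charge flow described above can be sketched as a simple event-driven virtual cart. This is a hypothetical illustration of the pattern, not Amazon’s implementation; the product names, prices, and method names are all made up.

```python
# Hypothetical sketch of a "just walk out" virtual cart: shelf sensors emit
# take/return events, and the cart is charged when the shopper exits.
from collections import Counter

PRICES = {"sandwich": 4.50, "juice": 2.00}  # made-up catalog

class VirtualCart:
    def __init__(self):
        self.items = Counter()

    def on_take(self, product):
        """Sensor detected a product leaving the shelf."""
        self.items[product] += 1

    def on_return(self, product):
        """Sensor detected a product placed back on the shelf."""
        if self.items[product] > 0:
            self.items[product] -= 1

    def checkout(self):
        """Called when the shopper leaves the store; returns the charge."""
        return round(sum(PRICES[p] * n for p, n in self.items.items()), 2)

cart = VirtualCart()
cart.on_take("sandwich")
cart.on_take("juice")
cart.on_return("juice")   # shopper changed their mind
print(cart.checkout())    # 4.5
```

The key design choice is that the cart is derived entirely from the event stream, so the shopper never interacts with a register at all.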

Please note: Even though Amazon recently announced the closure of some of its physical shops, the experience remains an example of innovative and efficient shopping methods.

Takeaway: Ingrain the art of preparation in the brand’s operational philosophy, utilizing advanced technologies to avoid inconvenience and provide an effortless customer experience.

Rule 6: Being Prepared in Case It Should Rain

In the context of the Japanese tea ceremony, being prepared for rain means being prepared for unexpected challenges.

According to Rikyu, when having tea, the host must be intentionally calm and ready to accommodate any situation that arises. Of course, this doesn’t just apply to changes in the weather!

For designers crafting brand experiences, this principle underscores the importance of building resilience and adaptability into the core of their strategies.

Examples:

  • Zoom,
  • Lego.

Zoom: Pioneering Remote Communication

Zoom was mostly used for business meetings before the Covid-19 pandemic struck. When it did, it forced most companies to digitize far more quickly than they otherwise would have done.

Zoom stepped up, improving its features so that everyone, from children to their baby boomer grandparents, found the user interface seamless and easy when connecting from their homes.

One of the most exciting business decisions taken by Zoom was to make its freemium tier entirely free and unlimited for K-12 students. This decision was taken during the early stages of the pandemic (March 2020), demonstrating empathy with the challenges faced by K-12 educational institutions.

The program significantly impacted schools, teachers, and students. It allowed for more collaborative and engaging virtual classrooms, thanks to features like Groups and useful interactions like whiteboards, raising hands, and replying with emojis.

As schools gradually returned to in-person learning and adapted to hybrid models, the free program ended. However, the positive impact of Zoom’s support during a critical period underlined the company’s adaptability and responsiveness to societal needs.

Takeaway: Designers should prioritize creating intuitive interfaces and scalable infrastructure that can accommodate surges in demand whilst also considering the impact on society.

Lego: Rebuilding From The Bricks Up

Without continuous adaptability and recognition of the ever-changing landscape of play, even a historic brand like Lego may have closed its doors!

In fact, if you are a Lego fan, you may have noticed that the brand underwent a profound change in the early 2000s.

In 1998, Lego launched an educational initiative known as Lego Mindstorms. This project used Lego’s signature plastic bricks to teach children how to construct and program robots — an innovative concept at the time since Arduino had not yet been introduced.

Lego’s decision to merge traditional play with technology demonstrated their dedication to keeping up with the digital age. Additionally, Lego Mindstorms appealed to a new audience: the broader open-source hardware and DIY electronics community that emerged during the period (and who, in 2005, found a better match in Arduino).

Please note: Even though the program is set to be discontinued by 2024, Lego’s resurgence is often cited as one of the most successful corporate turnarounds.

Lego still continues to thrive, expanding its product lines, collaborating with popular franchises, and maintaining its status as a beloved brand among children and adults alike.

Takeaway: Designers can adapt to change by refocusing on the brand’s core strengths, embracing digital innovation and new targets to exemplify resilience in the face of challenges.

Rule 7: Acting with Utmost Consideration Towards Your Guests

During the tea ceremony in Kyoto, I perceived in every gesture the perfect level of attention to detail, including my response to the tasting and the experience as a whole. I felt the impact of my experience from the moment I entered until long after I left the tea room, even as I write about it now.

This rule highlights the importance of intuitive hospitality and involves creating an environment in which guests feel welcomed, valued, and respected.

For designers, this means facilitating brand experiences that put customer satisfaction first and aim to build strong and lasting relationships.

Brands that excel in this rule go above and beyond to provide uniquely personalized experiences that foster long-term loyalty.

Examples:

  • Stardust App,
  • Tony’s Chocolonely.

Stardust App: Empowering Women’s Health with Privacy and Compassion

Stardust is an astrology-based menstrual cycle-tracking app that debuted in the Apple Store. It became the most downloaded iPhone app in late June after the U.S. Supreme Court struck down Roe v. Wade (which ended the constitutional right to an abortion and instantly made abortion care illegal in more than a dozen states).

In a world where tracking apps often lack sensitivity, Stardust App emerges with an elegant interface that makes monitoring women’s health a visually pleasing experience. But beyond aesthetics, what really sets Stardust apart is its witty and humorous tone of voice.

Acknowledging the nuances of mood swings and pains associated with periods, Stardust’s notification messages and in-app descriptions resonate with women, adding a delightful touch to a typically challenging time.

This blend of sophistication and humor creates a unique and supportive space for women’s wellness.

Note:

The female-founded app underwent scrutiny from TechCrunch, Vice, and Screen Rant, which questioned its collaboration with third parties and the app’s safety. So, on October 29th, 2022, they released a more precise and comprehensive Privacy Policy that explains in a readable way how third parties are used and how the end-to-end encryption works.

They also ensured that all sessions were anonymous so that the app would not be able to associate data with users in the event of law-enforcement requests.

Takeaway: Design a brand experience with utmost consideration toward users and that transcends the transactional to foster an enduring sense of trust, empathy, and loyalty.

Tony’s Chocolonely: Sweet Indulgence and Ethical Excellence

In their commitment to fair trade, Tony’s Chocolonely exemplifies acting with utmost consideration towards both consumers and the environment beyond merely offering delicious chocolate.

More specifically, Tony’s Chocolonely has redefined the chocolate industry by championing fair-trade practices. By introducing a sustainable business model, not only do they satisfy the cravings of chocolate enthusiasts, but they also address the broader demand for ethically sourced and produced chocolate.

In every detail, from the wrapper to the chocolate bar itself, Tony’s Chocolonely is a brand on a mission. The intentional unevenness of their chocolate bar is a profound symbol, mirroring the uneven and unjust landscape of the chocolate industry. This urges consumers to choose fairness, ethical sourcing, and a commitment to change.

Takeaway: Designers can elevate brand experiences by integrating thoughtful and personalized elements that speak to their industry and resonate with the values of their audience.

Conclusion

In the gentle and artistic practice of the Japanese tea ceremony, I discovered through Rikyu’s seven principles an illuminated path of consideration that resonates beyond the tea room, offering profound insights for crafting compelling brand experiences.

Rikyu’s ancient wisdom serves as a timeless guide, reminding us that creating a memorable experience is a balanced dance between intention and harmony and, above all, the valuable attention of those we invite into our brand spaces as welcome guests.

]]>
hello@smashingmagazine.com (Chiara Aliotta)
<![CDATA[Now Shipping: Success At Scale, A New Smashing Book by Addy Osmani]]> https://smashingmagazine.com/2024/03/success-at-scale-book-release/ https://smashingmagazine.com/2024/03/success-at-scale-book-release/ Wed, 20 Mar 2024 11:00:00 GMT ”Success at Scale”. It’s filled with practical insights and real-world case studies of how big changes can be made on projects of any size. Addy Osmani has curated finest examples, case studies and interviews to help you get successful at scale. Jump to the details and get the book right away.]]> Building at scale is hard. With legacy, fragile systems, large impact and complex front-end architecture, making big changes to an existing site or app might feel nothing short of daunting. We might have a very motivated team, but we don't always know how to get there. Enter Success at Scale, our shiny new book that might just help you to get big changes to work in your company. Jump to the details.

Rather than focusing on conventional project management steps and procedures, Success at Scale is about the ideas and processes that worked — and a few that didn’t. This book captures the challenges, frustrations, and wins through the eyes of the people who make change happen, with a strong focus on performance, capabilities, accessibility, and developer experience. Just published, print + eBook, and shipping around the world! Get a PDF sample (14.7MB), and get your book right away.

  • ISBN: 978-3-910835-00-9 (print)
  • Available in hardcover and eBook formats — PDF, ePUB, and Amazon Kindle.
  • Hardcover edition comes with free worldwide airmail shipping from Germany.
  • Download a free sample (PDF, 14.7MB).
  • Get the book right away.

About The Book

Success at Scale is a curated collection of best-practice case studies capturing how production sites of different sizes tackle performance, accessibility, capabilities, and developer experience at scale. Case studies are from industry experts from teams at Instagram, Shopify, Netflix, eBay, Figma, Spotify, Wix, Lyft, LinkedIn, and many more. Guidance that stands the test of time.

Join Addy Osmani, your curator, as we dive into a nuanced look at several key topics that will teach you tips and tricks that may help you optimize your own sites. The book also includes short interviews with contributors on what additional lessons, challenges, and tips they have to share some time after the case studies were written.

  • Performance includes examples of measuring, budgeting, optimizing, and monitoring performance, in addition to tips for building a performance culture.
  • Capabilities is about bridging the gap between native capabilities and the modern web. You’ll explore web apps, native apps, and progressive web applications.
  • Accessibility makes web apps viable for diverse users, including people with temporary or permanent disabilities. Most of us will have a disability at some point in our lives, and these case studies show how we can make the web work for all of us.
  • Developer Experience is about building a project environment and culture that encourage support, growth, and problem-solving within teams. Strong teams build great projects!

High-quality hardcover, 304-page print book. Curated by Addy Osmani. Cover design by Espen Brunborg.

The book is full of real-world case studies, interviews, examples, and results. The book is divided into four clear sections: performance, capabilities, accessibility, and developer experience. There’s even an attached ribbon bookmark to help you save your place!

Who Is This Book For?

This book is for professional web developers and teams who want to deliver high-quality web experiences and are looking for real-world examples and insights. Every project is different, but in this book, you will recognize common challenges and frustrations faced by many teams — maybe even your own — and see how other developers conquered them. Success at Scale goes beyond beginner material to cover pragmatic approaches required to tackle these projects in the real world.

About the Author

Addy Osmani is an engineering leader working on Google Chrome. He leads Chrome’s Developer Experience organization, helping reduce the friction for developers to build great user experiences.

Testimonials
“Case studies based on real-world examples are a fantastic way to learn, and Addy Osmani has captured that perfectly in this book. The interviews are compelling and connect with actionable guidance you can get started with right away. Addy has a wealth of experience working with end-users, developers, and companies to understand what exactly makes for a successful web experience. As a result, Success At Scale is a treasure trove of wisdom and practical advice. Get this book if you want to build more accessible, performant, and resilient websites.”

Umar Hansa, Web Developer
“This book reveals in its pages the collective wisdom of frontend engineers working on global web projects. It’s the perfect way to enhance your web applications’ potential by optimizing performance with practical tips straight from the trenches.”

Luca Mezzalira, Principal Serverless Specialist Solutions Architect at Amazon Web Services (AWS) and O’Reilly author
“Success at Scale, masterfully curated by Addy Osmani, serves as an invaluable compass for aspiring developers navigating the complex waters of web development. It’s more than a book; it’s a transmission of wisdom, guiding junior developers towards the shores of big tech companies. With its in-depth case studies and focus on performance, capabilities, accessibility, and developer experience, it prepares the next generation of tech talent to not just participate in, but to shape the future of digital innovation.”

Jerome Hardaway, Engineering AI products at Microsoft
We’re Trying Out Something New

In an effort to conserve resources here at Smashing, we’re trying something new with Success at Scale. The printed book is 304 pages, and we make an expanded PDF version available to everyone who purchases a print book. This accomplishes a few good things:

  • We will use less paper and materials because we are making a smaller printed book;
  • We’ll use fewer resources in general to print, ship, and store the books, leading to a smaller carbon footprint; and
  • Keeping the book at a more manageable size means we can continue to offer free shipping on all Smashing orders!

Smashing Books have always been printed with materials from FSC Certified forests. We are committed to finding new ways to conserve resources while still bringing you the best possible reading experience.

Community Matters ❤️

Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)

More Smashing Books & Goodies

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Heather and Steven are two of these people. Have you checked out their books already?

Understanding Privacy

Everything you need to know to put your users first and make a better web.

Get Print + eBook

Touch Design for Mobile Interfaces

Learn how touchscreen devices really work — and how people really use them.

Get Print + eBook

Interface Design Checklists

100 practical cards for common interface design challenges.

Get Print + eBook

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Sketchnotes And Key Takeaways From SmashingConf Antwerp 2023]]> https://smashingmagazine.com/2024/03/sketchnotes-key-takeaways-smashingconf-antwerp-2023/ https://smashingmagazine.com/2024/03/sketchnotes-key-takeaways-smashingconf-antwerp-2023/ Mon, 18 Mar 2024 18:00:00 GMT I have been reading and following Smashing Magazine for years — I’ve read many of the articles and even some of the books published. I’ve also been able to attend several Smashing workshops, and perhaps one of the peak experiences of my isolation times was the online SmashingConf in August 2020. Every detail of that event was so well-designed that I felt genuinely welcomed. The mood was exceptional, and even though it was a remote event, I experienced similar vibes to an in-person conference. I felt the energy of belonging to a tribe of other great design professionals.

I was really excited to find out that the talks at SmashingConf Antwerp 2023 were going to be focused on design and UX! This time, I attended remotely again, just like back in 2020: I could watch and live-sketch note seven talks (and I’m already looking forward to watching the remaining talks I couldn’t attend live).

Even though I participated remotely, I got really inspired. I had a lot of fun, and I felt truly involved. There was an online platform where the talks were live-streamed, as well as a dedicated Slack channel for the conference attendees. Additionally, I shared my key takeaways and sketchnotes right after each talk on social media. That way, I could have little discussions around the topics — even though I wasn’t there in person.

In this article, I would like to offer a brief summary of each talk, highlighting my takeaways (and my screenshots). Then, I will share my sketchnotes of those seven talks (+ two more I watched after the conference).

Day 1 Talks

Introduction

At the very beginning of the conference, Vitaly said hello to everyone watching online, so even though I participated remotely, I felt welcomed. :-) He also shared that there is an overarching mystery theme of the conference, and the first one who could guess it would get a free ticket for the next Smashing conference — I really liked this gamified approach.

Vitaly also reminded us that we should share our success stories as well as our failure stories (how we’ve grown, learned, and improved over time).

We were introduced to the Pac-man rule: if we are having a conversation and someone approaches from the back and wants to join, open up the circle for them — just like Pac-man does (well, Pac-man opens his mouth because he wants to eat; you want to encourage conversations).

In between talks, Vitaly told us a lot of design jokes; for instance, this one related to design systems was a great fit for the first talk:

Where did Gray 500 and Button Primary go on their first date?

To a naming convention.

After this little warm-up, Molly Hellmuth delivered the first talk of the event. Molly has been a great inspiration for me not only as a design system consultant but also as a content creator and community builder. I’m also enthusiastic about learning the more advanced aspects of Figma, so I was really glad that Molly chose this topic for her talk.

“Design System Traps And Pitfalls” by Molly Hellmuth

Molly is a design system expert specializing in Figma design systems, and she teaches a course called Design System Bootcamp. Every time she runs this course, she sees students make similar mistakes. In this talk, she shared the most common mistakes and how to avoid them.

Molly shared the most common mistakes she experienced during her courses:

  • Adopting new features too quickly,
  • Adding too many color variables,
  • Using groups instead of frames,
  • Creating jumbo component sets,
  • Not prepping icons for our design system.

She also shared some rapid design tips:

  • Set the nudge amount to 8;
  • We can hide components in a library by adding a period or an underscore;
  • We can go to a specific layer by double-clicking on the layer icon;
  • Scope variables so that, e.g., colors meant for text are only available for text;
  • Use auto layout stacking order (it is not only for avatars; e.g., it is great for dropdown menus, too).

“How AI Ate My Website” by Luke Wroblewski

I have been following Luke Wroblewski since the early days of my design career. I read his book “Web Form Design: Filling in the Blanks” back in 2011, so I was really excited to attend his talk. Also, the topic of AI and design has been a hot one lately, so I was very curious about the conversational interface he created.

Luke has been creating content for 27 years; for example, there are 2,012 articles on his website. There are also videos, books, and PDFs. He created an experience that lets us ask questions of an AI that has been fed with this data (all of his articles, videos, books, and so on).

In his talk, he explained how he created the interaction pattern for this conversational interface. It is more like a FAQ pattern and not a chatbot pattern. Here are some details:

  • He also tackled the “what I should ask” problem by providing suggested questions below the most recent answer; that way, he can provide a smoother, uninterrupted user flow.

  • He linked all the relevant sources so that users can dig deeper (he calls it the “object experience”). Users can click on a citation link, and then they are taken to, e.g., a specific point of a video.

He also showed us how AI eats all this stuff (e.g., processing, data cleaning) and talked about how it assembles the answers (e.g., how to pick the best answers).

So, to compare Luke’s experience to, e.g., ChatGPT, here are some points:

  • It is more opinionated and specific (ChatGPT gives a “general world knowledge” answer);
  • We can dig deeper by using the relevant resources.

You can try it out on the ask.lukew.com website.
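The retrieval-style pattern described above — rank the author’s own content against the question, answer from the best match, cite the source, and suggest follow-ups — can be sketched roughly as below. This is not Luke’s actual pipeline: the corpus, URLs, and word-overlap scoring are toy stand-ins (real systems use embeddings and a language model to compose the answer).

```python
# Rough sketch of a retrieval-based Q&A flow over an author's own content.
# All documents and URLs here are hypothetical placeholders.

CORPUS = [
    {"title": "Web Form Design", "url": "https://example.com/forms",
     "text": "labels inputs forms validation placement"},
    {"title": "Mobile First", "url": "https://example.com/mobile",
     "text": "mobile screens constraints touch first"},
]

def retrieve(question, corpus):
    """Rank documents by word overlap with the question (toy scoring)."""
    words = set(question.lower().split())
    return max(corpus, key=lambda doc: len(words & set(doc["text"].split())))

def answer(question, corpus):
    doc = retrieve(question, corpus)
    return {
        "source": doc["title"],
        "citation": doc["url"],  # lets the user dig deeper into the source
        # Suggested follow-ups tackle the "what should I ask" problem.
        "suggested": [f"Tell me more about {doc['title']}?"],
    }

print(answer("Where should form labels go?", CORPUS)["source"])  # Web Form Design
```

Because every answer is grounded in a specific document, each response can link back to its source and seed the next question, producing the uninterrupted, FAQ-like flow the talk describes.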

“A Journey in Enterprise UX” by Stéphanie Walter

Stéphanie Walter is also a huge inspiration and a designer friend of mine. I really appreciate her long-form articles, guides, and newsletters. Additionally, I have been working in banking and fintech for the last couple of years, so working for an enterprise (in my case, a bank) is a situation I’m familiar with, and I couldn’t wait to hear about a fellow designer’s perspective and insights about the challenges in enterprise UX.

Stéphanie’s talk resonated with me on so many levels, and below is a short summary of her insightful presentation.

On complexity, she discussed the following points:

  1. Looking at quantitative data: What? How much?
    Doing some content analysis (e.g., any duplicates?)
  2. After the “what” and discovering the “as-is”: Why? How?
    • By getting access to internal users;
    • Conducting task-focused user interviews;
    • Documenting everything throughout the process;
    • “Show me how you do this today” to tackle the “jumping into solutions” mindset.

Stéphanie shared with us that there are two types of processes:

  • Fast track
    Small features, tweaks on the UI — in these cases, there is no time or no need to do intensive research; it involves mostly UI design.
  • Specific research for high-impact parts
    When there is a lot of doubt (“we need more data”). It involves gathering the results of the previous research activities; scheduling follow-up sessions; iterating on design solutions and usability testing with prototypes (usually using Axure).
    • Observational testing
      “Please do the things you did with the old tool but with the new tool” (instead of using detailed usability test scripts).
    • User diary + longer studies to help understand the behavior over a period of time.

She also shared what she wishes she had known sooner about designing for enterprise experiences, e.g., it can be a trap to oversimplify the UI or the importance of customization and providing all the data pieces needed.

It was also very refreshing that she corrected the age-old saying about user interfaces: you know, the one that starts with, “The user interface is like a joke...”. The thing is, sometimes, we need some prior knowledge to understand a joke. This fact doesn’t make a joke bad. It is the same with user interfaces. Sometimes, we just need some prior knowledge to understand it.

Finally, she talked about some of the main challenges in such environments, like change management, design politics, and complexity.

Her design process in enterprise UX looks like this:

  • Complexity
    How am I supposed to design that?
  • Analysis
    Making sense of this complexity.
  • Research
    Finding and understanding the puzzle pieces.
  • Solution design
    Eventually, everything clicks into place.

The next talk was about creating a product with a Point of View, meaning that a product’s tone of voice can be “unique,” “unexpected,” or “interesting.”

“Designing A Product With A Point Of View” by Nick DiLallo

Unlike the other eight speakers whose talks I sketched, I wasn’t familiar with Nick’s work before the conference. However, I’m really passionate about UX writing (and content design), so I was excited to hear Nick’s points. After his talk, I became a fan of his work (check out his great articles on Medium).

In his talk, Nick DiLallo shared many examples of good and not-so-good UX copy.

His first tip was to start by defining our target audience since the first step towards writing anything is not writing. Rather, it is figuring out who is going to be reading it. If we define who will be reading as a starting point, we will be able to make good design decisions for our product.

For instance, instead of designing for “anyone who cooks a lot”, it is a lot better to design for “expert home chefs”. We don’t need to tell them to “salt the water when they are making pasta”.

After defining our audience, the next step is saying something interesting. Nick’s recommendation is that we should start with one good sentence that can unlock the UI and the features, too.

The next step is about choosing good words; for example, instead of “join” or “subscribe,” we can say “become a member.” However, sometimes we shouldn’t get too creative, e.g., we should never say “add to submarine” instead of “add to cart” or “add to basket”.

We should design our writing. This means that what we include signals what we care about, and the bigger something is visually, the more it will stand out (it is about establishing a meaningful visual hierarchy).

We should also find moments to add voice, e.g., the footer can contain more than a legal text. On the other hand, there are moments and places that are not for adding more words; for instance, a calendar or a calculator shouldn’t contain brand voice.

Nick also highlighted that the entire interface speaks about who we are and what our worldview is. For example, what options do we include when we ask the user’s gender?

He also added that what we do is more important than what we write. For example, we can say that it is a free trial, but if the next thing the UI asks for is our bank card details, it’s like someone saying they are vegetarian and then eating a cheeseburger in front of me.

Nick closed his talk by saying that companies should hire writers or content designers since words are part of the user experience.

“When writing and design work together, the results are remarkable.”

“The Invisible Power of UI Typography” by Oliver Schöndorfer

This year, Oliver has quickly become one of my favorite design content creators. I attended some of his webinars, I’m a subscriber of his Font Friday newsletter, and I really enjoy his “edutainment style”. He is like a stand-up comedian. His talks and online events are full of great jokes and fun, but at the same time, Oliver always manages to share his extensive knowledge about typography and UI design. So I knew that the following talk was going to be great. :)

During his talk, Oliver redesigned a banking app screen live, gradually adding the enhancements he talked about. His talk started with this statement:

“The UI is the product, and a big part of it is the text.”

After that, he asked an important question:

“How can we make the type work for us?”

Some considerations we should keep in mind:

  • Font Choice
    System fonts are boring. We should think about what the voice of our product is! So, pick fonts that:
    • are in the right category (mostly sans, sometimes slabs),
    • have even strokes with a little contrast (it must work in small sizes),
    • have open-letter shapes,
    • have letterforms that are easy to distinguish (the “Il1” test).

  • Hierarchy
    i.e. “What is the most important thing in this view?”

Start with the body text, then emphasize and deemphasize everything else — and watch out for the accessibility aspects (e.g. minimum contrast ratios).

Accessibility is important, too!

  • Spacing
    Relations should be clear (law of proximity), and we should define a base unit for spacing.

Then we can add some final polish (and if it is appropriate, some delight).

As Oliver said, “Go out there and pimp that type!”
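To make the hierarchy advice a bit more concrete, here is a minimal CSS sketch of the “start with the body text, then emphasize and deemphasize” approach. The values, colors, and class names are my own illustration, not from Oliver’s talk:

```css
/* Start from the body text and work outward */
body {
  font-family: "Your Brand Sans", sans-serif; /* hypothetical; pick one that passes the "Il1" test */
  font-size: 1rem;
  line-height: 1.5;
  color: #222; /* well above the 4.5:1 minimum contrast ratio on white */
}

/* Emphasize: the most important thing in the view */
h1 {
  font-size: 1.75rem;
  font-weight: 700;
}

/* Deemphasize: secondary details step down in size,
   without dropping below accessible contrast */
.caption {
  font-size: 0.875rem;
  color: #595959; /* roughly 7:1 on white, still comfortably readable */
}
```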

Day 2 Talks

“Design Beyond Breakpoints” by Christine Vallaure

I’m passionate about the designer-developer collaboration topic (I have a course and some articles about it), so I was very excited to hear Christine’s talk! Additionally, I really appreciate all the Figma content she shares, so I was sure that I’d learn some new exciting things about our favorite UI design software.

Christine’s talk was about pushing the current limits of Figma: how to do responsive design in Figma, e.g., by using so-called container queries. These queries are like media queries, but instead of looking at the viewport size, we look at the container. So a component behaves differently if, e.g., it is inside a sidebar, and we can also nest container queries (e.g., tell an icon button inside a card that, upon resizing, the icon should disappear).

Recommended Reading: A Primer On CSS Container Queries by Stephanie Eckles
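For context, here is a minimal sketch of what a container query looks like in actual CSS (the selectors are hypothetical; the syntax follows the CSS Containment spec):

```css
/* The sidebar becomes a named size container */
.sidebar {
  container: sidebar / inline-size;
}

/* When the sidebar container is narrow, hide the card's icon,
   regardless of the overall viewport width */
@container sidebar (width < 20rem) {
  .card .icon {
    display: none;
  }
}
```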

She also shared that there is a German fairy tale about a race between a hedgehog and a rabbit. The hedgehog wins the race even though he is slower. Since he is smarter, he sends his wife (who looks exactly like him) to the finish line in advance. Christine told us that she had mixed feelings about this story because she didn’t like the idea of pretending to be fast when someone has other great skills. In her analogy, the rabbits are the developers, and the hedgehogs are the designers. Her lesson was that we should embrace each others’ tools and skills instead of trying to mimic each others’ work.

The lesson of the talk was not really about pushing the limits. Rather, the talk was about reminding us of why we are doing all this:

  • To communicate our design decisions better to the developers,
  • To try out how our design behaves in different cases (e.g., where it should break and how), and
  • To document our designs; for this purpose, she recommended the EightShapes Specs plugin by Nathan Curtis.

Her advice is:

  • We should create a playground inside Figma and try out how our components and designs work (and let developers try out our demo, too);
  • We should have many discussions with developers and not start these discussions from zero, e.g., read a bit about frontend development and gain fundamental knowledge of development aspects.

“It’s A Marathon, And A Sprint” by Fabricio Teixeira

If you are a design professional, you have surely encountered at least a couple of articles published by the UX Collective, a very impactful design publication. Fabricio is one of the founders of that awesome corner of the Internet, so I knew that his talk would be full of insights and little details. He shared four case studies and included a lot of great advice.

During his talk, Fabricio used the analogy of running. When we prepare for a long-distance running competition, 80% of the time should go to easy runs and 20% to intensive, short interval runs, because that mix gets the best results. He also highlighted that, just like during a marathon, things will get hard during our product design projects, but we must remember how much we trained. When someone from the audience asked how not to get overly confident, he said that we should build an environment of trust so that other people on our team can make us realize if we’ve become too confident.

He then mentioned four case studies; all of these projects required a different, unique approach and design process:

  • Product requirements are not required.
    Vistaprint and designing face masks — the world needed them to design really fast; it was a 15-day sprint, and they did not have time to design all the color and sizing selectors (and only after the launch did it turn into a marathon).

  • Timelines aren’t straight lines.
    The case study of Equinox treadmill UI: they created a fake treadmill to prototype the experience; they didn’t wait for the hardware to get completed (the hardware got delayed due to manufacturing issues), so there was no delay in the project even in the face of uncertainty and ambiguity. For example, they took into account the hand reach zones, increased the spacing between UI elements so that these remained usable even while the user was running, and so on.

An exciting challenge: the average treadmill interface is a complicated dashboard where everything fights for our attention.

  • Research is a mindset, not a step.
    He mentioned the Gofundme project, where they applied a fluid approach to research, meaning that design and research ran in parallel: design informed research and vice versa. Also, insights can come from anyone on the team, not just from researchers. I really liked that they started a book club, everyone read a book about social impact, and they created a Figma file that served as a knowledge hub.

  • Be ready for some math
    During the New York City Transit project, they created a real-time map of the subway system, which required them to create a lot of vectors and do some math. One of the main design challenges was, “How to clean up complexity?”

Fabricio shared that we should be “flexibly rigorous”: just as we listen to our body during running, we should listen to the specific context of a given project. There is no magic formula out there. Rigor and discipline are important, but we must listen to our body so that we don’t lose touch with reality.

The key takeaway: we as a design community focus a lot on processes, but there is no one way to do design. We should combine sprints and marathons, adjust our approach to the needs of the given project, and, most of all, focus more on principles, e.g., how do we, as a team, want to work together?

One last note: in the post-talk discussion with Vitaly Friedman, Fabricio mentioned that a 1–3-hour-long kick-off meeting with our team is too short when we will work on something for, e.g., six months, so his team introduced kick-off weeks.

Kat delivered one of the most important talks (or maybe the most important talk) of the conference. The ethics of design is a topic that has been around for many years now. Delivering a talk like this is challenging because it requires a perspective that easily gets lost in our everyday design work. I was really curious about how Kat would make us think and have us question our way of working.

“Design Ethically: From Imperative To Action” by Kat Zhou

Kat’s talk walked us through our current reality: how algorithms have built-in biases, manipulate users, hide content that shouldn’t be hidden, and fail to block things that shouldn’t be allowed. The main question, however, is:

Why is that happening? Why do designers create such experiences?

Kat’s answer is that companies must ruthlessly design for growth. And we, as designers, have the power to exercise control over others.

She showed us some examples of what she considers oppressive design, like the Panopticon by Jeremy Bentham. She also provided an example of hostile architecture (whose goal is to prevent humans from resting in public places). There are also dark patterns within digital experiences, such as the New York Times subscription cancellation flow (users had to make a call to cancel).

And the end goal of oppressive design is always to get more user data, more of the users’ time, and more of the users’ money. What amplifies this effect is that, from an employee’s (designer’s) perspective, performance is tied to achieving OKRs.

Our challenge is how we might redesign the design process so that it doesn’t perpetuate the existing systems of power. Kat’s suggestion is that we should add some new parts to the design process:

  • There are two phases:
    Intent: “Is this problem a worthy problem to solve?”
    Results: “What consequences do our solutions have? Who is it helping? Who is it harming?”
  • Add “Evaluate”:
    “Is the problem statement we defined even ethically worthy of being addressed?”
  • Add “Forecast”:
    “Can any ethical violations occur if we implement this idea?”
  • Add “Monitor”:
    “Are there any new ethical issues occurring? How can we design around them?”

Kat shared a toolkit and framework that help us understand the consequences of the things we are building.

Kat talked about forecasting in more detail. As she said,

“Forecasted consequences often are design problems.”

Our responsibility is to design around those forecasted consequences. We can pull a product apart by thinking about the layers of effect:

  • The primary layer of effect is intended and known, e.g.: Google Search is intended and known as a search engine.
  • The secondary effect is also known and intended by the team, e.g., Google Search is an ad revenue generator.
  • The tertiary effect is typically unintended and possibly known, e.g., in Algorithms of Oppression, Safiya Umoja Noble talks about the biases built into Google Search.

So designers should define and design ethical primary and secondary effects, forecast tertiary effects, and ensure that they don’t pose any significant harm.

I first encountered atomic design in 2015, and I remember that I was so fascinated by the clear logical structure behind this mental model. Brad is one of my design heroes because I really admire all the work he has done for the design community. I knew that behind the “clickbait title” (Brad said it himself), there’d be some great points. And I was right: he mentioned some ideas I have been thinking about ever since his talk.

“Is Atomic Design Dead?” by Brad Frost

In the first part of the talk, Brad gave us a little WWW history starting from the first website all the way to web components. Then he summarized that design systems inform and influence products and vice versa.

I really liked that he listed three problematic cases:

  • When the design system team is very separated, sitting in their ivory tower.
  • When the design system police put everyone in the design system jail for detaching an instance.
  • When the product roadmaps eat the design system efforts.

He then summarized the foundations of atomic design (atoms, molecules, organisms, templates and pages) and gave a nice example using Instagram.

He answered the question asked in the title of the talk: atomic design is not dead since it is still a useful mental model for thinking about user interfaces, and it helps teams find a balance, an equilibrium, between design systems and products.

And then here came the most interesting and thought-provoking part: where do we go from here?

  1. What if we don’t waste any more human potential on designing yet another date picker, but instead, we create a global design system together, collaboratively? It’d be an unstyled component that we can style for ourselves.

  2. The other topic he brought up is the use of AI, and he mentioned Luke Wroblewski’s talk, too. He also talked about the project he is working on with Kevin Coyle: it is about converting a codebase (and its documentation) to a format that GPT-4 can understand. Brad showed us a demo of creating an alert component using ChatGPT (and this limited corpus).

His main point was that since the “genie” is out of the bottle, it is on us to use AI more responsibly. Brad closed his talk by highlighting the importance of using human potential and time for better causes than designing one more date picker.

Mystery Theme/Other Highlights

When Vitaly first got on stage, one of the things he asked the audience to keep an eye out for was an overarching mystery theme that connects all the talks. At the end of the conference, he finally revealed the answer: the theme was connected to the city of Antwerp!

Where does the name “Antwerp” come from? From “hand werpen,” or “to throw a hand.” Once upon a time, a giant collected money from everyone crossing the river. Then a soldier came, cut off the giant’s hand, and threw it to the other side, liberating the city. So, the story and the theme were “legends.” For instance, Molly Hellmuth included Bigfoot (Sasquatch), Stéphanie mentioned Prometheus, Nick added the word “myth” to one of his slides, Oliver applied a typeface usually used in fairy tales, Christine mentioned Sisyphus, and Kat talked about Pandora’s box.

My Very Own Avatar

One more awesome thing that happened thanks to attending this conference: I got a great surprise from the Smashing team! I won the hidden “Best Sketch Notes” challenge and was gifted a personalized avatar created by Smashing Magazine’s illustrator, Ricardo.

Full Agenda

There were other great talks — I’ll be sure to watch the recordings! For anyone asking, here is the full agenda of the conference.

A huge thanks again to all of the organizers! You can check out all the current and upcoming Smashing conferences planned on the SmashingConf website anytime.

Saving The Best For Last: Photos And Recordings

The one-and-only Marc Thiele captured in-person vibes at the event — you can see the stunning, historic Bourla venue it took place in and how memorable it all must have been for the attendees! 🧡

For those who couldn’t make it in person and are curious to watch the talks, well, I have good news for you! The recordings have been recently published — you can watch them over here:


Thank you for reading! I hope you enjoyed reading this as much as I did writing it! See you at the next design & UX SmashingConf in Antwerp, maybe?

]]>
hello@smashingmagazine.com (Krisztina Szerovay)
<![CDATA[Event Calendars For Web Made Easy With These Commercial Options]]> https://smashingmagazine.com/2024/03/event-calendars-web-commercial-options/ https://smashingmagazine.com/2024/03/event-calendars-web-commercial-options/ Tue, 12 Mar 2024 12:00:00 GMT This article is sponsored by Bryntum

Need a full-fledged calendar for scheduling events on your website? Just download one of these and get started. We have curated a collection of some of the best calendar components available in the market that are worth the investment for your next big project. Whether you want to simply use their default set-up without touching the source code or roll up your sleeves to dive deep and customize everything to your heart’s content, these calendars have something to offer for everyone.

Bryntum Calendar

Convenience is the theme. The Bryntum Calendar can be used with vanilla JavaScript or popular JavaScript frameworks like React, Angular, and Vue, giving you considerable flexibility for easy integration into a wide range of projects. The component seamlessly connects with the other Bryntum power widgets, such as the Data Grid.

The out-of-the-box calendar has a modern and clean design with crisp icons. It renders nicely and executes fast. The user controls have smooth transitions and resize well for different screen sizes. Drag and drop for the events is supported.

Need more than the five nifty themes the calendar comes with? The customization possibilities, using CSS/Sass, go beyond just the calendar views — day, week, month, year, and agenda — and include ways to restyle tooltips and user controls with a lot of liberty and just a few keystrokes. Extensive configurations are at hand to update both the impression and functions of the calendar.

The Bryntum Calendar features include multi-user assigned tasks and time zone support, making it ideal for global teams to collaborate and plan together. For potent project management, there’s the ingenious resource view to monitor at a glance every member’s agenda and see who is free to be scheduled.

All configuration options and features are well documented with interactive and customizable live examples, and you will also find helpful integration guides for Outlook and Google Calendar. Additionally, there’s an impressive collection of customized calendar examples that you can browse and choose from.

A 45-day free trial is available, which includes over thirty examples for you to get started with. If you have any queries regarding the pricing and license, you can contact Bryntum at https://bryntum.com/contact/. Bryntum also has an easy-to-use technical support forum for its clients and offers paid professional services like migration assistance for their products.

KendoReact Scheduler

Whatever you need, it’s in the KendoReact UI bundle, including an easy-to-integrate calendar with a contemporary design for React applications. Options for other frameworks, such as jQuery and .NET MVC, are also available depending on which plan you purchase.

The Calendar/Scheduler, as it is, has a nice, comprehensible look and is easy to navigate. It is WCAG compliant and supports accessibility, something that screen reader users and keyboard navigators will appreciate.

All the basic calendar features are impeccably executed, including showing different calendar views, like monthly, and creating recurring events and timelines. For easy customization, it comes with some built-in themes, or you can build your own with custom Sass variables.

For global teams, the KendoReact Scheduler supports different time zones, as well as globalization for other languages.

Comprehensive feature and API documentation, with built-in sample code editors, is available. To learn more about the pricing, you can request a quote at https://www.telerik.com/purchase/request-a-quote. There’s technical support for customers in the form of a ticket system and forums. You can also opt for a 30-day free trial that includes technical support.

DevExtreme Scheduler

Apt to dive straight into work, DevExtreme Scheduler has distraction-free but detailed features that now include work shifts. It is available for Angular, React, Vue, and jQuery.

You’ll find a standard collection of demos on the website for you to preview and make up your mind. Additionally, there’s a comprehensive theme builder for customization. For pinpoint touchups, the CSS can be updated.

The calendar is responsive to mobile screens and includes a drag-and-drop feature, timezone support, and Google Calendar integration.

You can take advantage of a 30-day free trial and a 60-day money-back guarantee. For queries, DevExtreme can be reached at https://js.devexpress.com/Support/. Ticket-based technical support is also available.

Mobiscroll Calendar

Responsiveness is at the forefront. If you are meticulous about how the calendar should look in different screen sizes, go through Mobiscroll’s documentation, for they have demo presets for different sizes in both desktop and mobile versions.

The calendar is available in JavaScript, jQuery, Angular, React, and Vue. The default calendar has a minimalist look and comes with a few base themes. Its appearance, including that of the tooltips, can be customized in CSS.

The calendar supports a drag-and-drop function for ease of use and different time zones for global collaborations. It also provides third-party calendar integration, with Google Calendar, for instance, which is available only during the non-trial period.

A 90-day trial is available, and there’s also a 30-day money-back guarantee. For queries on license and fee, Mobiscroll can be contacted at https://mobiscroll.com/contact. Both standard and priority customer support are provided, along with a community forum for any questions.

Webix Scheduler

Just drop this in and get started. Webix Scheduler is a JavaScript calendar widget that’s easy to fit into your website. The default look is a pastel theme — there’s a full-screen live demo — but it is customizable via the preset skins, API, or the theme builder.

It includes all the basic calendar views and functionality. New events can be added using drag-and-drop. There’s a collection of JavaScript coding samples with a live code editor for you to test and learn from.

Webix Scheduler comes with standard online documentation for the API and user guide. There’s a 30-day free trial you can sign up for. For more information, Webix can be contacted at https://webix.com/contact-us/. For clients, technical support is available from ticket-based systems to live chats, depending on the plan purchased. There’s also a community forum to ask questions.

And that’s the end of the calendar round-up. If you want to share with our community a commercial web calendar (or an open-sourced one) that you’ve tried and loved, let us know in the comments. Happy scheduling!

]]>
hello@smashingmagazine.com (Preethi Sam)
<![CDATA[Success At Scale: Last Chance For Pre-Sale Price]]> https://smashingmagazine.com/2024/03/success-at-scale-book-pre-release/ https://smashingmagazine.com/2024/03/success-at-scale-book-pre-release/ Fri, 08 Mar 2024 16:00:00 GMT Get your copy and save now!]]> We love practical books that focus on failures and success stories. And we’ve finally sent our latest book, Addy Osmani’s Success at Scale, to the printer. That means you’ll soon have another wonderful, practical book in your hands. It also means the price of the book is about to go up. So pre-order your copy now to save on the cover price by the 11th of March, 2024!

Reviews And Testimonials
“This book reveals in its pages the collective wisdom of frontend engineers working on global web projects. It’s the perfect way to enhance your web applications’ potential by optimizing performance with practical tips straight from the trenches.”

Luca Mezzalira, Principal Serverless Specialist Solutions Architect at Amazon Web Services (AWS) and O’Reilly author
“Success at Scale, masterfully curated by Addy Osmani, serves as an invaluable compass for aspiring developers navigating the complex waters of web development. It’s more than a book; it’s a transmission of wisdom, guiding junior developers towards the shores of big tech companies. With its in-depth case studies and focus on performance, capabilities, accessibility, and developer experience, it prepares the next generation of tech talent to not just participate in, but to shape the future of digital innovation.”

Jerome Hardaway, Engineering AI products at Microsoft

Contents

Success at Scale is a curated collection of best-practice case studies capturing how production sites of different sizes tackle performance, accessibility, capabilities, and developer experience at scale. Case studies are from industry experts from teams at Instagram, Shopify, Netflix, eBay, Figma, Spotify, Wix, Lyft, LinkedIn, and many more. Guidance that stands the test of time.

Join Addy Osmani, your curator, as we dive into a nuanced look at several key topics that will teach you tips and tricks that may help you optimize your own sites. The book also includes short interviews with contributors on what additional lessons, challenges, and tips they have to share some time after the case studies were written.

  • Performance includes examples of measuring, budgeting, optimizing, and monitoring performance, in addition to tips for building a performance culture.
  • Capabilities is about bridging the gap between native capabilities and the modern web. You’ll explore web apps, native apps, and progressive web applications.
  • Accessibility makes web apps viable for diverse users, including people with temporary or permanent disabilities. Most of us will have a disability at some point in our lives, and these case studies show how we can make the web work for all of us.
  • Developer Experience is about building a project environment and culture that encourage support, growth, and problem-solving within teams. Strong teams build great projects!

We’re Trying Out Something New

In an effort to conserve resources here at Smashing, we’re trying something new with Success at Scale. The printed book will be 304 pages, and we’ll make an expanded PDF version available to everyone who purchases a print book. This accomplishes a few good things:

  • We will use less paper and materials because we are making a smaller printed book;
  • We’ll use fewer resources in general to print, ship, and store the books, leading to a smaller carbon footprint; and
  • Keeping the book at a more manageable size means we can continue to offer free shipping on all Smashing orders!

Smashing Books have always been printed with materials from FSC Certified forests. We are committed to finding new ways to conserve resources while still bringing you the best possible reading experience.

About the Author

Addy Osmani is an engineering leader working on Google Chrome. He heads up Chrome’s Developer Experience organization, helping reduce the friction for developers to build great user experiences.

Technical Details
  • ISBN: 978-3-910835-00-9 (print)
  • Quality hardcover, 304 pages, stitched binding, ribbon page marker.
  • Free worldwide airmail shipping from Germany starting mid-March 2024.
  • eBook available for download as PDF, ePUB, and Amazon Kindle.
  • Pre-order the book.

Community Matters ❤️

Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)

More Smashing Books & Goodies

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Heather and Steven are two of these people. Have you checked out their books already?

Understanding Privacy

Everything you need to know to put your users first and make a better web.

Add to cart $44

Touch Design for Mobile Interfaces

Learn how touchscreen devices really work — and how people really use them.

Add to cart $44

Interface Design Checklists

100 practical cards for common interface design challenges.

Add to cart $39

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Modern CSS Tooltips And Speech Bubbles (Part 2)]]> https://smashingmagazine.com/2024/03/modern-css-tooltips-speech-bubbles-part2/ https://smashingmagazine.com/2024/03/modern-css-tooltips-speech-bubbles-part2/ Fri, 08 Mar 2024 12:00:00 GMT I hope you were able to spend time getting familiar with the techniques we used to create tooltips in Part 1 of this quick two-parter. Together, we were able to create a flexible tooltip pattern that supports different directions, positioning, and coloration without adding any complexity whatsoever to the HTML.

We leveraged the CSS border-image property based on another article I wrote while applying clever clip-path tricks to control the tooltip’s tail. If you haven’t checked out that article or the first part of this series, please do because we’re jumping straight into things today, and the context will be helpful.

So far, we’ve looked at tooltip shapes with triangular tails with the option to have rounded corners and gradient coloration. Sure, we were able to do lots of things, but there are many other — and more interesting — shapes we can accomplish. That’s what we’re doing in this article. We will handle cases where a tooltip may have a border and different tail shapes, still with the least amount of markup and most amount of flexibility.

Before we start, I want to remind you that I’ve created a big collection of 100 tooltip shapes. I said in Part 1 that we would accomplish all of them in these two articles. We covered about half of them in Part 1, so let’s wrap things up here in Part 2.

The HTML

Same as before!

<div class="tooltip">Your text content goes here</div>

That’s the beauty of what we’re making: We can create many, many different tooltips out of the same single HTML element without changing a thing.

Tooltips With Borders

Adding a border to the tooltips we made in Part 1 is tricky. We need the border to wrap around both the main element and the tail in a continuous fashion so that the combined shape is seamless. Let’s start with the first simple shape we created in Part 1 using only two CSS properties:

.tooltip {
  /* tail dimensions */
  --b: 2em; /* base */
  --h: 1em; /* height */

  /* tail position */
  --p: 50%;

  border-image: fill 0 // var(--h)
    conic-gradient(#CC333F 0 0);
  clip-path: 
    polygon(0 100%, 0 0, 100% 0, 100% 100%,
      min(100%, var(--p) + var(--b) / 2) 100%,
      var(--p) calc(100% + var(--h)),
      max(0%, var(--p) - var(--b) / 2) 100%);
}

Here’s the demo. You can use the range slider to see how flexible it is to change the tail position by updating the --p variable.
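As a quick illustration of that flexibility (the modifier class names here are my own, hypothetical examples, not part of the demo), repositioning the tail is nothing more than overriding --p:

```css
/* Hypothetical modifier classes; only the tail position changes: */
.tooltip.tail-left   { --p: 20%; }
.tooltip.tail-center { --p: 50%; } /* the default */
.tooltip.tail-right  { --p: 80%; }
```

Note that the min() and max() functions in the clip-path clamp the tail so it never escapes the tooltip's edges, even at extreme --p values.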

The one caveat is that the border thickness around the tail isn’t perfectly uniform. That probably is not that big a deal in most cases. A few pixels aren’t a glaring visual issue, but you can decide whether or not it meets your needs. Me? I’m a perfectionist, so let’s try to fix this minor detail even if the code will get a little more complex.

We need to do some math that requires trigonometric functions. Specifically, we need to change some of the variables because we cannot get what we want with the current setup. Instead of using the base variable for the tail’s dimensions, I will consider an angle. The second variable that controls the height will remain unchanged.

The relationship between the base (--b), the height (--h), and the angle (--a) is B = 2*H*tan(A/2). We can use this to update our existing code:
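To sanity-check that relationship with the values we’ve been using (the derived --b below is my own illustration, not something the final code needs, and it relies on CSS trigonometric function support in modern browsers):

```css
/* With --a: 90deg and --h: 1em:
   B = 2 * H * tan(A / 2) = 2 * 1em * tan(45deg) = 2em,
   which is exactly the --b: 2em base from the previous example. */
.tooltip {
  --a: 90deg;
  --h: 1em;
  --b: calc(2 * var(--h) * tan(var(--a) / 2)); /* resolves to 2em */
}
```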

.tooltip {
  /* tail dimensions */
  --a: 90deg; /* angle */
  --h: 1em; /* height */

  --p: 50%; /* position */
  --t: 5px; /* border thickness */

  border-image: fill 0 // var(--h)
    conic-gradient(#5e412f 0 0); /* the border color */
  clip-path: 
    polygon(0 100%, 0 0, 100% 0, 100% 100%,
      min(100%, var(--p) + var(--h) * tan(var(--a) / 2)) 100%,
      var(--p) calc(100% + var(--h)),
      max(0%, var(--p) - var(--h) * tan(var(--a) / 2)) 100%);
  position: relative;
  z-index: 0;
}
.tooltip:before {
  content: "";
  position: absolute;
  inset: var(--t) 0;
  border-image: fill 0 / 0 var(--t) / var(--h) 0
    conic-gradient(#CC333F 0 0); /* the background color */
  clip-path: inherit;
}

Nothing drastic has changed. We introduced a new variable to control the border thickness (--t) and updated the clip-path property with the new variables that define the tail’s dimensions.

Now, all the work will be done on the pseudo-element’s clip-path property. It will no longer inherit the main element’s value, but it does need a new value to get the correct border thickness around the tail. I want to avoid getting deep into the complex math behind all of this, so here is the implementation:

clip-path: 
  polygon(0 100%, 0 0, 100% 0, 100% 100%,
    min(100% - var(--t), var(--p) + var(--h) * tan(var(--a) / 2) - var(--t) * tan(45deg - var(--a) / 4)) 100%,
    var(--p) calc(100% + var(--h) + var(--t) * (1 - 1 / sin(var(--a) / 2))),
    max(var(--t), var(--p) - var(--h) * tan(var(--a) / 2) + var(--t) * tan(45deg - var(--a) / 4)) 100%);

It looks complex because it is! You don’t really need to understand the formulas since all you have to do is adjust a few variables to control everything.

Now, finally, our tooltip is perfect. Here is an interactive demo where you can adjust the position and the thickness. Don’t forget to also play with the dimension of the tail as well.

Let’s move on to the rounded corners. We can simply use the code we created in the previous article. We duplicate the shape using a pseudo-element and make a few adjustments for perfect alignment and a correct border thickness.

The reason I’m not going into details for this one is to make the point that you don’t have to remember all the various use cases and code snippets by heart. The goal is to understand the actual concepts we are using to build the tooltips, like working with border-image, clip-path(), gradients, and math functions.

I can’t even remember most of the code I write after it’s done, but it’s no issue since all I have to do is copy and paste then adjust a few variables to get the shape I want. That’s the benefit of leveraging modern CSS features — they handle a lot of the work for us.

Border-Only Tooltips

I’d like to do one more exercise with you, this time making a tooltip with no fill but still with a full border around the entire shape. So far, we’ve been able to reuse a lot of the code we put together in Part 1, but we’re going to need new tricks to pull this one off.

The goal is to establish a transparent background while maintaining the border. We’ll start without rounded corners for the moment.

See how we’re going to be working with gradients again? I could have used a single color to produce a solid, single-color border, but I put a hard stop in there to demonstrate the idea. We’ll be able to create even more variations, thanks to this little detail, like using multiple colors, different color stops, and even different types of gradients.

You’ll see that the code looks fairly straightforward:

.tooltip {
  /* tail dimension */
  --a: 90deg; /* angle */
  --h: 1em; /* height */

  --p: 50%; /* tail position */
  --b: 7px; /* border thickness */

  position: relative;
}
.tooltip:before {
  content: "";
  position: absolute;
  inset: 0 0 calc(-1*var(--h));
  clip-path: polygon( ... ); /* etc. */
  background: linear-gradient(45deg, #cc333f 50%, #79bd9a 0); /* colors */
}

We’re using a pseudo-element again, this time with a clip-path to establish the shape. From there, we set a linear-gradient() on the background.
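Since the coloration comes entirely from that background, swapping the gradient restyles the whole border, tail included, without ever touching the shape. A few hypothetical variants (the class names are mine, just for illustration):

```css
/* Only the background changes; the clip-path stays exactly the same: */
.tooltip.solid:before   { background: #cc333f; }
.tooltip.striped:before { background: repeating-linear-gradient(45deg, #cc333f 0 10px, #79bd9a 0 20px); }
.tooltip.conic:before   { background: conic-gradient(from 45deg, #cc333f, #79bd9a, #cc333f); }
```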

I said the code “looks” very straightforward. Structurally, yes. But I purposely put a placeholder clip-path value because that’s the complicated part. We needed to use a 16-point polygon and math formulas, which honestly gave me big headaches.

That’s why I turn to my online generator in most cases. After all, what’s the point of everyone spending hours trying to suss out which formulas to use if math isn’t your thing? May as well use the tools that are available to you! But note how much better it feels to use those tools when you understand the concepts that are working under the hood.

OK, let’s tackle rounded corners:

For this one, we are going to rely on not one, but two pseudo-elements, :before and :after. One will create the rounded shape, and the other will serve as the tail.

The above figure illustrates the process for creating the rounded part with the :before pseudo-element. We first start with a simple rectangular shape that’s filled with a conic gradient containing three colors. Then, we mask the shape so that the inner area is transparent. After that, we use a clip-path to cut out a small part of the bottom edge to reserve space for the tail we’ll make with the :after pseudo-element.

/* the main element */
.tooltip {
  /* tail dimension */
  --a: 90deg; /* angle */
  --h: 1em; /* height */

  --p: 50%; /* tail position  */
  --b: 7px; /* border thickness */
  --r: 1.2em; /* the radius */

  position: relative;
  z-index: 0;
}

/* both pseudo-elements */
.tooltip:before,
.tooltip:after {
  content: "";
  background: conic-gradient(#4ECDC4 33%, #FA2A00 0 66%, #cf9d38 0);  /* the coloration */
  inset: 0;
  position: absolute;
  z-index: -1;
}

/* the rounded rectangle */
.tooltip:before {
  background-size: 100% calc(100% + var(--h));
  clip-path: polygon( ... );
  mask: linear-gradient(#000 0 0) content-box, linear-gradient(#000 0 0);
  mask-composite: exclude;
  padding: var(--b);
}

/* the tail */
.tooltip:after {
  bottom: calc(-1 * var(--h));
  clip-path: polygon( ... );
}

Once again, the structure is not all that complex and the clip-path value is the tough part. As I said earlier, there’s really no need to get deep into an explanation on it when we can use the points from an online generator to get the exact shape we want.

The new piece that is introduced in this code is the mask property. It uses the same technique we covered in yet another Smashing article I wrote. Please read that for the full context of how mask and mask-composite work together to trim the transparent area. That’s the first part of your homework after finishing this article.
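In the meantime, here is the masking idea boiled down to a standalone sketch (assumed values, outside the tooltip context) so you can see what it does in isolation:

```css
/* Two mask layers: one painted over the whole box, one over the content
   box only (shrunk by the padding). mask-composite: exclude keeps the area
   covered by exactly one layer (the padding ring), so the background only
   shows through as a hollow, gradient-colored frame. */
.hollow-frame {
  padding: 7px; /* doubles as the border thickness */
  background: conic-gradient(#4ECDC4 33%, #FA2A00 0 66%, #cf9d38 0);
  mask:
    linear-gradient(#000 0 0) content-box, /* inner layer */
    linear-gradient(#000 0 0);             /* outer layer */
  mask-composite: exclude;
}
```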

Fun Tail Shapes

We’ve covered pretty much every single one of the tooltips available in my collection. The only ones we have not specifically touched use a variety of different shapes for the tooltip’s tail.

All of the tooltips we created in this series used a simple triangle-shaped tail, which is a standard tooltip pattern. Sure, we learned how to change its dimensions and position, but what if we want a different sort of tooltip? Maybe we want something fancier or something that looks more like a speech or thought bubble.

If the rounded corners in the last section are the first part of your homework, then the next part is to try making these tail variations yourself using what we have learned together in these two articles. You can always find the code over at my collection for reference and hints. And, leave a comment here if you have any additional questions — I’m happy to help!

Conclusion

I hope you enjoyed this little series because I sure had a blast writing it. I mean, look at all of the things we accomplished in a relatively short amount of space: simple rectangular tooltips, rounded corners, different tail positions, solid and gradient backgrounds, a bunch of border options, and finally, custom shapes for the tail.

I probably went too far with how many types of tooltips we could make — there are 100 in total when you count them up! But it goes to show just how many possibilities there are, even when we’re always using the same single element in the HTML.

And, it’s great practice to consider all of the different use cases and situations you may run into when you need a tooltip component. Keep these in your back pocket for when you need them, and use my collection as a reference, for inspiration, or as a starting point for your own work!

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Temani Afif)
<![CDATA[The End Of My Gatsby Journey]]> https://smashingmagazine.com/2024/03/end-of-gatsby-journey/ https://smashingmagazine.com/2024/03/end-of-gatsby-journey/ Wed, 06 Mar 2024 08:00:00 GMT A fun fact about me is that my birthday is on Valentine’s Day. This year, I wanted to celebrate by launching a simple website that lets people receive anonymous letters through a personal link. The idea came to me at the beginning of February, so I wanted to finish the project as soon as possible since time was of the essence.

Having that in mind, I decided not to do SSR/SSG with Gatsby for the project but rather go with a single-page application (SPA) using Vite and React — a rather hard decision considering my extensive experience with Gatsby. Years ago, when I started using React and learning more and more about today’s intricate web landscape, I picked up Gatsby.js as my render framework of choice because SSR/SSG was necessary for every website, right?

I used it for everything, from the most basic website to the most over-engineered project. I absolutely loved it and thought it was the best tool, and I was incredibly confident in my decision since I was getting perfect Lighthouse scores in the process.

The years passed, and I found myself constantly fighting with Gatsby plugins, resorting to hacky solutions for them and even spending more time waiting for the server to start. It felt like I was fixing more than making. I even started a series for this magazine all about the “Gatsby headaches” I experienced most and how to overcome them.

It was like Gatsby got tougher to use with time because of lots of unaddressed issues: outdated dependencies, cold starts, slow builds, and stale plugins, to name a few. Starting a Gatsby project became tedious for me, and perfect Lighthouse scores couldn’t make up for that.

So, I’ve decided to stop using Gatsby as my go-to framework.

To my surprise, the Vite + React combination I mentioned earlier turned out to be a lot more efficient than I expected while maintaining almost the same great performance measures as Gatsby. It’s a hard conclusion to stomach after years of Gatsby’s loyalty.

I mean, I still think Gatsby is extremely useful for plenty of projects, and I plan on talking about those in a bit. But Gatsby has undergone a series of recent unfortunate events after Netlify acquired it, the impacts of which can be seen in down-trending results from the most recent State of JavaScript survey. The likelihood of a developer picking up Gatsby again after using it for other projects plummeted from 89% to a meager 38% between 2019 and 2022 alone.

Although Gatsby was still the second most-used rendering framework as recently as 2022 — we are still expecting results from the 2023 survey — my prediction is that the decline will continue and dip well below 38%.

Seeing as this is my personal farewell to Gatsby, I wanted to write about where, in my opinion, it went wrong, where it is still useful, and how I am handling my future projects.

Gatsby: A Retrospective

Kyle Mathews started working on what would eventually become Gatsby in late 2015. Thanks to its unique data layer and SSG approach, it was hyped for success and achieved a $3.8 million funding seed round in 2018. Despite initial doubts, Gatsby remained steadfast in its commitment and became a frontrunner in the Jamstack community by consistently enhancing its open-source framework and bringing new and better changes with each version.

So... where did it all go wrong?

I’d say it was the introduction of Gatsby Cloud in 2019, as Gatsby aimed at generating continuous revenue and solidifying its business model. Many (myself included) pinpoint Gatsby’s downfall to Gatsby Cloud, as it would end up cutting resources from the main framework and even making it harder to host in other cloud providers.

The core framework had been optimized in a way that using Gatsby and Gatsby Cloud together required no additional hosting configurations, which, as a consequence, made deployments in other platforms much more difficult, both by neglecting to provide documentation for third-party deployments and by releasing exclusive features, like incremental builds, that were only available to Gatsby users who had committed to using Gatsby Cloud. In short, hosting projects on anything but Gatsby Cloud felt like a penalty.

As a framework, Gatsby lost users to Next.js, as shown in both surveys and npm trends, while Gatsby Cloud struggled to compete with the likes of Vercel and Netlify; the former acquiring Gatsby in February of 2023.

“It [was] clear after a while that [Gatsby] weren’t winning the framework battle against Vercel, as a general purpose framework [...] And they were probably a bit boxed in by us in terms of building a cloud platform.”

Matt Biilmann, Netlify CEO

The Netlify acquisition was the last straw in an already tumbling framework haystack. The migration from Gatsby Cloud to Netlify wasn’t pretty for customers either; some teams were charged 120% more — or had incurred extraneous fees — after converting from Gatsby Cloud to Netlify, even with the same Gatsby Cloud plan they had! Many key Gatsby Cloud features, specifically incremental builds that reduced build times of small changes from minutes to seconds, were simply no longer available in Netlify, despite Kyle Mathews saying they would be ported over to Netlify:

“Many performance innovations specifically for large, content-heavy websites, preview, and collaboration workflows, will be incorporated into the Netlify platform and, where relevant, made available across frameworks.”

— Kyle Mathews

However, in a Netlify forum thread dated August 2023, a mere six months after the acquisition, a Netlify support engineer contradicted Mathews’s statement, saying there were no plans to add incremental features in Netlify.

That left no significant reason to remain with Gatsby. And I think this comment on the same thread perfectly sums up the community’s collective sentiment:

“Yikes. Huge blow to Gatsby Cloud customers. The incremental build speed was exactly why we switched from Netlify to Gatsby Cloud in the first place. It’s really unfortunate to be forced to migrate while simultaneously introducing a huge regression in performance and experience.”

Netlify’s acquisition also brought about a company restructuring that substantially reduced the headcount of Gatsby’s engineering team, followed by a complete stop in commit activities. A report in an ominous tweet by Astro co-founder Fred Schott further exacerbated concerns about Gatsby’s future.

Lennart Jörgens, former full-stack developer at Gatsby and Netlify, replied, insinuating there was only one person left after the layoffs.

You can see all these factors contributing to Gatsby’s usage downfall in the 2023 Stack Overflow survey.

Biilmann addressed the community’s concerns about Gatsby’s viability in an open issue from the Gatsby repository:

“While we don’t plan for Gatsby to be where the main innovation in the framework ecosystem takes place, it will be a safe, robust and reliable choice to build production quality websites and e-commerce stores, and will gain new powers by ways of great complementary tools.”

— Matt Biilmann

He also shed light on Gatsby’s future focus:

  • “First, ensure stability, predictability, and good performance.
  • Second, give it new powers by strong integration with all new tooling that we add to our Composable Web Platform (for more on what’s all that, you can check out our homepage).
  • Third, make Gatsby more open by decoupling some parts of it that were closely tied to proprietary cloud infrastructure. The already-released Adapters feature is part of that effort.”

— Matt Biilmann

So, Gatsby gave up competing against Next.js on innovation, and instead, it will focus on keeping the existing framework clean and steady in its current state. Frankly, this seems like the most reasonable course of action considering today’s state of affairs.

Why Did People Stop Using Gatsby?

Yes, Gatsby Cloud ended abruptly, but as a framework independent of its cloud provider, other aspects encouraged developers to look for alternatives to Gatsby.

As far as I am concerned, Gatsby’s developer experience (DX) became more of a burden than a help, and there are two main culprits where I lay the blame: dependency hell and slow bundling times.

Dependency Hell

Go ahead and start a new Gatsby project:

gatsby new

After waiting a couple of minutes, you will get your brand-new Gatsby site. You’d rightly expect a clean slate, with zero vulnerabilities and no outdated dependencies, from this out-of-the-box setup, but here’s what you will find in the terminal once you run npm audit:

18 vulnerabilities (11 moderate, 6 high, 1 critical)

That looks concerning — and it is — not so much from a security perspective but as an indication of decaying DX. As a static site generator (SSG), Gatsby will, unsurprisingly, deliver a static and safe site that (normally) doesn’t have access to a database or server, making it immune to most cyber attacks. Besides, lots of those vulnerabilities are in the developer tools and never reach the end user. Alas, relying on npm audit to assess your site security is a naive choice at best.

However, those vulnerabilities reveal an underlying issue: the whopping number of dependencies Gatsby uses is 168(!) at the time I’m writing this. For the sake of comparison, Next.js uses 16 dependencies. A lot of Gatsby’s dependencies are outdated, hence the warnings, but trying to update them to their latest versions will likely unleash a dependency hell full of additional npm warnings and errors.

In a related subreddit from 2022, a user asked, “Is it possible to have a Gatsby site without vulnerabilities?”

The real answer is disappointing, but as of March 2024, it remains true.

A Gatsby site should work completely fine, even with that many dependencies, and extending your project shouldn’t be a problem, whether through its plugin ecosystem or other packages. However, when trying to upgrade any existing dependency, you will find that you can’t! Or at least you can’t do it without introducing breaking changes to one of the 168 dependencies, many of which rely on outdated versions of other libraries that also cannot be updated.

It’s that inception-like roundabout of dependencies that I call dependency hell.

Slow Build And Development Times

To me, one of the most important aspects of choosing a development tool is how comfortable it feels to use it and how fast it is to get a project up and running. As I’ve said before, users don’t care or know what a “tech stack” is or what framework is in use; they want a good-looking website that helps them achieve the task they came for. Many developers don’t even question what tech stack is used on each site they visit; at least, I hope not.

With that in mind, choosing a framework boils down to how efficiently you can use it. If your development server constantly experiences cold starts and crashes and is unable to quickly reflect changes, that’s a poor DX and a signal that there may be a better option.

That’s the main reason I won’t automatically reach for Gatsby from here on out. Installation is no longer a trivial task; the dependencies are firing off warnings, and it takes the development server upwards of 30 seconds to boot. I’ve even found that the longer the server runs, the slower it gets; this happens constantly to me, though I admittedly have not heard similar gripes from other developers. Regardless, I get infuriated having to constantly restart my development server every time I make a change to gatsby-config.js, gatsby-node.js files, or any other data source.

This new reality is particularly painful, knowing that a Vite.js + React setup can start a server within 500ms thanks to the use of esbuild.

Running gatsby build gets worse. Build times for larger projects normally stretch into minutes, which is understandable when we consider all of the pages, data sources, and optimizations Gatsby does behind the scenes. However, even a small content edit to a page triggers a full build and deployment process, and the endless waiting is not only exhausting but downright distracting for getting things done. That’s what incremental builds were designed to solve and the reason many people switched from Netlify to Gatsby Cloud. It’s a shame we no longer have that as an available option.

The moment Gatsby Cloud was discontinued along with incremental builds, the incentives for continuing to use Gatsby became pretty much non-existent. The slow build times are simply too costly to the development workflow.

What Gatsby Did Awesomely Well

I still believe that Gatsby has awesome things that other rendering frameworks don’t, and that’s why I will keep using it, albeit for specific cases, such as my personal website. It just isn’t my go-to framework for everything, mainly because Gatsby (and the Jamstack) wasn’t meant for every project, even if Gatsby was marketed as a general-purpose framework.

Here’s where I see Gatsby still leading the competition:

  • The GraphQL data layer.
    In Gatsby, all the configured data is available in the same place, a data layer that’s easy to access using GraphQL queries in any part of your project. This is by far the best Gatsby feature, and it trivializes the process of building static pages from data, e.g., a blog from a content management system API or documentation from Markdown files.
  • Client performance.
    While Gatsby’s developer experience is questionable, I believe it delivers one of the best user experiences for navigating a website. Static pages and assets deliver the fastest possible load times, and using React Router with pre-rendering of proximate links offers one of the smoothest experiences for navigating between pages. We also have to note Gatsby’s amazing image API, which optimizes images to the fullest.
  • The plugin ecosystem (kinda).
    There is typically a Gatsby plugin for everything. This is awesome when using a CMS as a data source since you could just install its specific plugin and have all the necessary data in your data layer. However, a lot of plugins went unmaintained and grew outdated, introducing unsolvable dependency issues that come with dependency hell.

I briefly glossed over the good parts of Gatsby in contrast to the bad parts. Does that mean that Gatsby has more bad parts? Absolutely not; you just won’t find the bad parts in any documentation. The bad parts also aren’t deal breakers in isolation, but they snowball into a tedious and lengthy developer experience that pushes its advocates toward other solutions or rendering frameworks.

Do We Need SSR/SSG For Everything?

I’ll go on record saying that I am not replacing Gatsby with another rendering framework, like Next.js or Remix, but just avoiding them altogether. I’ve found they aren’t actually needed in a lot of cases.

Think, why do we use any type of rendering framework in the first place? I’d say it’s for two main reasons: crawling bots and initial loading time.

SEO And Crawling Bots

Most React apps start with a hollow body, only having an empty <div> alongside <script> tags. The JavaScript code then runs in the browser, where React creates the Virtual DOM and injects the rendered user interface into the page.

Over slow networks, users may notice a white screen before the page is actually rendered, which is just mildly annoying at best (but devastating at worst).

However, search engines like Google and Bing deploy bots that only see an empty page and decide not to crawl the content. Or, if you are linking up a post on social media, you may not get OpenGraph benefits like a link preview.

<body>
  <div id="root"></div>

  <script type="module" src="/src/main.tsx"></script>
</body>

This was the case years ago, making SSR/SSG necessary for getting noticed by Google bots. Nowadays, Google can run JavaScript and render the content to crawl your website. While using SSR or SSG does make this process faster, not all bots can run JavaScript. It’s a tradeoff you can make for a lot of projects and one you can minimize on your cloud provider by pre-rendering your content.

Initial Loading Time

Pre-rendered pages load faster since they deliver static content that relieves the browser from having to run expensive JavaScript.

It’s especially useful when loading pages that are behind authentication; in a client-side rendered (CSR) page, we would need to display a loading state while we check if the user is logged in, while an SSR page can perform the check on the server and send back the correct static content. I have found, however, that this trade-off is an uncompelling argument for using a rendering framework over a CSR React app.

In any case, my SPA built on React + Vite.js gave me a perfect Lighthouse score for the landing page. Pages that fetch data behind authentication resulted in near-perfect Core Web Vitals scores.

What Projects Gatsby Is Still Good For

Gatsby and rendering frameworks are excellent for programmatically creating pages from data and, specifically, for blogs, e-commerce, and documentation.

Don’t be disappointed, though, if it isn’t the right tool for every use case, as that is akin to blaming a screwdriver for not being a good hammer. It still has good uses, though fewer than it otherwise might due to all the reasons we discussed before.

But Gatsby is still a useful tool. If you are a Gatsby developer, the main reason you’d reach for it is that you already know Gatsby. Not using it might be considered an opportunity cost in economic terms:

“Opportunity cost is the value of the next-best alternative when a decision is made; it’s what is given up.”

Imagine a student who spends an hour and $30 attending a yoga class the evening before a deadline. The opportunity cost encompasses the time that could have been dedicated to completing the project and the $30 that could have been used for future expenses.

As a Gatsby developer, I could start a new project using another rendering framework like Next.js. Even if Next.js has faster server starts, I would need to factor in my learning curve to use it as efficiently as I do Gatsby. That’s why, for my latest project, I decided to avoid rendering frameworks altogether and use Vite.js + React — I wanted to avoid the opportunity cost that comes with spending time learning how to use an “unfamiliar” framework.

Conclusion

So, is Gatsby dead? Not at all, or at least I don’t think Netlify will let it go away any time soon. The acquisition and subsequent changes to Gatsby Cloud may have taken a massive toll on the core framework, but Gatsby is very much still breathing, even if the current slow commits pushed to the repo look like it’s barely alive or hibernating.

I will most likely stick to Vite.js + React for my future endeavors and only use rendering frameworks when I actually need them. What are the tradeoffs? Sacrificing negligible page performance in favor of a faster and more pleasant DX that maintains my sanity? I’ll take that deal every day.

And, of course, this is my experience as a long-time Gatsby loyalist. Your experience is likely to differ, so the mileage of everything I’m saying may vary depending on your background using Gatsby on your own projects.

That’s why I’d love for you to comment below: if you see it differently, please tell me! Is your current experience using Gatsby different, better, or worse than it was a year ago? What’s different to you, if anything? It would be awesome to get other perspectives in here, perhaps from someone who has been involved in maintaining the framework.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Juan Diego Rodríguez)
<![CDATA[Modern CSS Tooltips And Speech Bubbles (Part 1)]]> https://smashingmagazine.com/2024/03/modern-css-tooltips-speech-bubbles-part1/ https://smashingmagazine.com/2024/03/modern-css-tooltips-speech-bubbles-part1/ Fri, 01 Mar 2024 12:00:00 GMT In a previous article, we explored ribbon shapes and different ways to approach them using clever combinations of CSS gradients and clip-path(). This time, I’d like to explore another shape, one that you’ve likely had to tackle at least once in your front-end life: tooltips. You know what we’re talking about, those little things that look like speech bubbles from comic books. They’re everywhere in the wild, from a hover effect for buttons to the text messaging app on your phone.

The shapes may look easy to make in CSS at first glance, but it always ends with a lot of struggles. For example, how do you adjust the position of the tail to indicate whether the tooltip is coming from a left, right, or center position? There are plenty of considerations to take into account when making tooltips — including overflowage, collision detection, and semantics — but it’s the shape and direction of the tail that I want to focus on because I often see inflexible fixed units used to position them.

Forget what you already know about tooltips because in this article, we will start from zero, and you will learn how to build a tooltip with minimal markup powered by modern CSS that provides flexibility to configure the component by adjusting CSS variables. We are not going to build one or two shapes, but… 100 different shapes!

That may sound like we’re getting into a super-long article, but actually, we can easily get there by adjusting a few values. In the end, you will have a back pocket full of CSS tricks that can be combined to create any shape you want.

And guess what? I’ve already created an online collection of 100 different tooltip shapes where you can easily copy and paste the code for your own use, but stay with me. You’re going to want to know the secret to unlocking hundreds of possibilities with the least possible code.

We’ll start with the shapes themselves, discussing how we can cut out the bubble and tail by combining CSS gradients and clipping. Then, we’ll pick things back up in a second article dedicated to improving another common approach to tooltips using borders and custom shapes.

The HTML

We’re only working with a single element:

<div class="tooltip">Your text content goes here</div>

That’s the challenge: Create hundreds of tooltip variations in CSS with only a single element to hook into in the HTML.

A Simple Tooltip Tail

I’m going to skip right over the basic rectangular shape; you know how to set a width and height (or aspect-ratio) on elements. Let’s start with the simplest shape for the tooltip’s tail, one that can be accomplished with only two CSS properties:

.tooltip {
  /* tail dimension */
  --b: 2em; /* base */
  --h: 1em; /* height */

  border-image: fill 0 // var(--h)
    conic-gradient(#CC333F 0 0); /* the color  */
  clip-path: 
    polygon(0 100%, 0 0, 100% 0, 100% 100%,
      calc(50% + var(--b) / 2) 100%,
      50% calc(100% + var(--h)),
      calc(50% - var(--b) / 2) 100%);
}

The border-image property creates an “overflowing color” while clip-path defines the shape of the tooltip with polygon() coordinates. (Speaking of border-image, I wrote a deep-dive on it and explain how it might be the only CSS property that supports double slashes in the syntax!)

The tooltip’s tail is placed at the bottom center, and we have two variables to control its dimensions:

We can do the exact same thing in more intuitive ways, like defining a background and then border (or padding) to create space for the tail:

background: #CC333F;
border-bottom: var(--h) solid #0000;

…or using box-shadow (or outline) for the outside color:

background: #CC333F;
box-shadow: 0 0 0 var(--h) #CC333F;

While these approaches are indeed easier, they require an extra declaration compared to the single border-image declaration we used. Plus, we’ll see later that border-image is really useful for accomplishing more complex shapes.

Here is a demo with the different directions so you can see how easy it is to adjust the above code to change the tail’s position.
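
For instance, flipping the tail to the top edge is mostly a matter of moving the polygon points to the opposite side. Here is a sketch based on the bottom-tail code above (the class name is mine, and this is not necessarily the exact code from the demo):

```css
.tooltip-top {
  /* tail dimension */
  --b: 2em; /* base */
  --h: 1em; /* height */

  border-image: fill 0 // var(--h)
    conic-gradient(#CC333F 0 0); /* the color */
  clip-path:
    polygon(0 0, 0 100%, 100% 100%, 100% 0,
      calc(50% + var(--b) / 2) 0,
      50% calc(-1 * var(--h)),
      calc(50% - var(--b) / 2) 0);
}
```

The same border-image declaration still works because the outset applies to all four sides; only the clip-path points move from the bottom edge to the top one.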

In that demo, moving the tail’s position (--p) all the way toward an edge causes part of the tail to fall outside the container. We can fix this by setting limits on some values so the tail never overflows. Two points of the polygon are involved in the fix.

This:

calc(var(--p) + var(--b) / 2) 100%

…and this:

calc(var(--p) - var(--b) / 2) 100%

The first calc() needs to be clamped to 100% to avoid the overflow from the right side, and the second one needs to be clamped to 0% to avoid the overflow from the left side. We can use the min() and max() functions to establish the range limits:

clip-path: 
  polygon(0 100%, 0 0, 100% 0, 100% 100%,
    min(100%, var(--p) + var(--b) / 2) 100%,
    var(--p) calc(100% + var(--h)),
    max(0%, var(--p) - var(--b) / 2) 100%);

Adjusting The Tail Shape

Let’s integrate another variable, --x, into the clip-path and use it to adjust the shape of the tail:

.tooltip {
  /* tail dimension */
  --b: 2em; /* base */
  --h: 1em; /* height */

  --p: 50%;  /* tail position */
  --x: -2em; /* tail shape */

  border-image: fill 0 // 9999px
    conic-gradient(#CC333F 0 0); /* the color  */
  clip-path: 
    polygon(0 100%, 0 0, 100% 0, 100% 100%,
      min(100%, var(--p) + var(--b) / 2) 100%,
      calc(var(--p) + var(--x)) calc(100% + var(--h)),
      max(0%, var(--p) - var(--b) / 2) 100%);
}

The --x variable can be either positive or negative (using whatever unit you want, including percentages). What we’re doing is adding the variable that establishes the tail’s shape, --x, to the tail’s position, --p. In other words, we’ve updated this:

var(--p) calc(100% + var(--h))

…to this:

calc(var(--p) + var(--x)) calc(100% + var(--h))

And here is the outcome:

The tooltip’s tail points in either the right or left direction, depending on whether --x is a positive or negative value. Go ahead and use the range sliders in the following demo to see how the tooltip’s tail is re-positioned (--p) and re-shaped (--x) when adjusting two variables.

Note that I have updated the border-image outset to an impractically large value (9999px) instead of using the --h variable. The shape of the tail can be any type of triangle and can take a bigger area. Since there’s no way for us to know the exact value of the outset, we use that big value to make sure we have enough room to fill the tail in with color, no matter its shape.

Does the outset concept look strange to you? I know that working with border-image isn’t something many of us do all that often, so if this approach is tough to wrap your head around, definitely go check out my border-image article for a thorough demonstration of how it works.

Working With Gradients

Most of the trouble starts when we want to color the tooltip with a gradient instead of a flat color. Applying one color is simple — even with older techniques — but when it comes to gradients, it’s not easy to make the tail color flow smoothly into the container’s color.

But guess what? That’s no problem for us because we are already using a gradient in our border-image declaration!

border-image: fill 0 // var(--h)
  conic-gradient(#CC333F 0 0);

border-image only accepts gradients or images, so to produce a solid color, I had to use a gradient consisting of just one color. But if you change it into a “real” gradient that transitions between two or more colors, then you get your tooltip gradient. That’s all!
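
For example, swapping the single-color conic-gradient() for a two-color linear gradient might look like this (the colors and angle here are placeholders, not the demo’s exact values):

```css
border-image: fill 0 // var(--h)
  linear-gradient(45deg, #CC333F, #4ECDC4);
```

Because the same image fills both the container and the tail, the gradient flows smoothly across the whole shape.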

We start by declaring a background and border-radius on the .tooltip. Nothing fancy. Then, we move to the border-image property so that we can add a bar (highlighted in red in the last figure) that slightly overflows the container from the bottom. This part is a bit tricky, and here I invite you to read my previous article about border-image to understand this bit of CSS magic. From there, we add the clip-path and get our final shape.

.tooltip {
  /* triangle dimension */
  --b: 2em; /* base */
  --h: 1em; /* height */

  --p: 50%;   /* position */
  --r: 1.2em; /* the radius */
  --c: #4ECDC4;

  border-radius: var(--r);
  clip-path: polygon(0 100%, 0 0, 100% 0, 100% 100%,
    min(100%, var(--p) + var(--b) / 2) 100%,
    var(--p) calc(100% + var(--h)),
    max(0%, var(--p) - var(--b) / 2) 100%);
  background: var(--c);
  border-image: conic-gradient(var(--c) 0 0) fill 0 /
    var(--r) calc(100% - var(--p) - var(--b) / 2) 0 calc(var(--p) - var(--b) / 2) /
    0 0 var(--h) 0;
}

This visual glitch happens when the border-image overlaps with the rounded corners. To fix this, we need to adjust the border-radius value based on the tail’s position (--p).

We are not going to update all the radii, only the bottom ones and, more precisely, the horizontal values. I want to remind you that border-radius accepts up to eight values — each corner takes two values that set the horizontal and vertical directions — and in our case, we will update the horizontal value of the bottom-left and bottom-right corners:

border-radius:
  /* horizontal values */
  var(--r) 
  var(--r) 
  min(var(--r),100% - var(--p) - var(--b)/2) /* horizontal bottom-right */
  min(var(--r),var(--p) - var(--b)/2) /* horizontal bottom-left */
  /
  /* vertical values */
  var(--r)
  var(--r)
  var(--r)
  var(--r)

All the corner values are equal to --r, except for the bottom-left and bottom-right corners. Notice the forward slash (/), as it is part of the syntax that separates the horizontal and vertical radii values.

Now, let’s dig in and understand what is happening here. For the bottom-left corner, when the position of the tail is on the right, the position (--p) variable value will be big in order to keep the radius equal to the radius (--r), which serves as the minimum value. But when the position gets closer to the left, the value of --p decreases and, at some point, becomes smaller than the value of --r. The result is the value of the radius slowly decreasing until it reaches 0. It adjusts as the position updates!

I know that’s a lot to process, and a visual aid usually helps. Try slowly updating the tail’s position in the following demo to get a clearer picture of what’s happening.

This time, the border image creates a horizontal bar along the bottom that is positioned directly under the element and extends outside of its boundary so that we have enough color for the tail when it’s closer to the edge.

.tooltip {
  /* tail dimension */
  --b: 2em;   /* base */
  --h: 1.5em; /* height */

  --p: 50%;   /* position */
  --x: 1.8em; /* tail position */
  --r: 1.2em; /* the radius */
  --c: #4ECDC4;

  border-radius: var(--r) var(--r) min(var(--r), 100% - var(--p) - var(--b) / 2) min(var(--r), var(--p) - var(--b) / 2) / var(--r);
  clip-path: polygon(0 100%, 0 0, 100% 0, 100% 100%,
    min(100%, var(--p) + var(--b) / 2) 100%,
    calc(var(--p) + var(--x)) calc(100% + var(--h)),
    max(0%, var(--p) - var(--b) / 2) 100%);
  background: var(--c);
  border-image: conic-gradient(var(--c) 0 0) 0 0 1 0 / 0 0 var(--h) 0 / 0 999px var(--h) 999px;
}

That’s why I do not use this approach when working with a simple isosceles triangle. This said, the method is perfectly fine, and in most cases, you may not see any visual glitches.

Putting Everything Together

We’ve looked at tooltips with tails that have equal sides, ones with tails that change shape, ones where the tail changes position and direction, ones with rounded corners, and ones that are filled in with gradients. What would it look like if we combined all of these examples into one mega-demo?

We can do it, but not by combining the approaches we’ve covered. We need another method, this time using a pseudo-element. No border-image for this one, I promise!

.tooltip {
  /* triangle dimension */
  --b: 2em; /* base */
  --h: 1em; /* height */

  --p: 50%;   /* position */
  --r: 1.2em; /* the radius */

  border-radius: var(--r) var(--r) min(var(--r), 100% - var(--p) - var(--b) / 2) min(var(--r), var(--p) - var(--b) / 2) / var(--r);
  background: 0 0 / 100% calc(100% + var(--h))
    linear-gradient(60deg, #CC333F, #4ECDC4); /* the gradient */
  position: relative;
  z-index: 0;
}
.tooltip:before {
  content: "";
  position: absolute;
  z-index: -1;
  inset: 0 0 calc(-1 * var(--h));
  background-image: inherit;
  clip-path:
    polygon(50% 50%,
      min(100%, var(--p) + var(--b) / 2) calc(100% - var(--h)),
      var(--p) 100%,
      max(0%, var(--p) - var(--b) / 2) calc(100% - var(--h)));
}

The pseudo-element is used to create the tail at the bottom and notice how it inherits the gradient from the main element to simulate a continuous gradient that covers the entire shape.

Another important thing to note is the background-size declared in the .tooltip. The pseudo-element is covering a bigger area due to the negative bottom value, so we have to increase the height of the gradient so it covers the same area.

Can you figure it out? The code for all of them is included in my tooltip collection if you need a reference, but do try to make them yourself — it’s good exercise! Maybe you will find a different (or perhaps better) approach than mine.

]]>
hello@smashingmagazine.com (Temani Afif)
<![CDATA[Waiting For Spring (March 2024 Wallpapers Edition)]]> https://smashingmagazine.com/2024/02/desktop-wallpaper-calendars-march-2024/ https://smashingmagazine.com/2024/02/desktop-wallpaper-calendars-march-2024/ Thu, 29 Feb 2024 10:00:00 GMT March is here! With the days getting noticeably longer in the northern hemisphere, the sun coming out, and the flowers blooming, we are fueled with fresh energy. And even if spring is far away in your part of the world, you might feel that 2024 has gained full speed by now — the perfect opportunity to bring all those plans you made and ideas you’ve been carrying around to life!

To cater for some extra inspiration this March, artists and designers from across the globe once again challenged their creative skills and designed wallpapers for your desktop and mobile screens. They come in versions with and without a calendar for March 2024 and can be downloaded for free — a monthly tradition that has been going on here at Smashing Magazine for more than twelve years already.

As a little bonus goodie, we also added some March favorites from our wallpapers archives to the collection. Maybe you’ll spot one of your almost-forgotten favorites from the past in here, too? Thank you to everyone who shared their wallpapers with us this month! This post wouldn’t exist without you.

  • You can click on every image to see a larger preview,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t anyhow influenced by us but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.

Breaking Superstitions

“Step into a world of delightful rebellion with our calendar feature for National Open an Umbrella Indoors Day. This vibrant illustration captures the spirit of breaking free from superstitions.” — Designed by PopArt Studio from Serbia.

Fresh Flow

“It’s time for the water to go down the mountains, it’s time for the rivers to get rid of ice blocks, it’s time for the ground to feed the plants, it’s time to go out and take a deep breath. I imagined these ideas with interlacing colored lines.” — Designed by Philippe Brouard from France.

Music From The Past

Designed by Ricardo Gimenes from Sweden.

Pika Pika Chu

Designed by Design Studio from India.

Hat’s Off to Good Fortune

“Flipping the switch on luck in 2024 with a tip of the hat! Here’s to welcoming good fortune, warmth, and endless smiles.” — Designed by HeyClipart from Greece.

Northern Lights

“Spring is getting closer, and we are waiting for it with open arms. This month, we want to enjoy discovering the northern lights. To do so, we are going to Alaska, where we have the faithful company of our friend White Fang.” — Designed by Veronica Valenzuela Jimenez from Spain.

Zombie’s Happy Hour

Designed by Ricardo Gimenes from Sweden.

Holi Celebration

“My deep love for colorful festivals, especially Holi, inspired me to create this captivating wallpaper. I wanted to visually express the essence of this cherished festival, infusing it with lively colors and playful elements. Crafting this wallpaper wasn’t just about completing a task; it was a chance to immerse myself in the joyous celebration of color and culture, sharing that happiness with others through my artistry.” — Designed by Cronix from Los Angeles.

Time To Wake Up

“Rays of sunlight had cracked into the bear’s cave. He slowly opened one eye and caught a glimpse of nature in blossom. Is it spring already? Oh, but he is so sleepy. He doesn’t want to wake up, not just yet. So he continues dreaming about those sweet sluggish days while everything around him is blooming.” — Designed by PopArt Studio from Serbia.

Queen Bee

“Spring is coming! Birds are singing, flowers are blooming, bees are flying… Enjoy this month!” — Designed by Melissa Bogemans from Belgium.

Spring Is Coming

“This March, our calendar design epitomizes the heralds of spring. Soon enough, you’ll be waking up to the singing of swallows, in a room full of sunshine, filled with the empowering smell of daffodil, the first springtime flowers. Spring is the time of rebirth and new beginnings, creativity and inspiration, self-awareness, and inner reflection. Have a budding, thriving spring!” — Designed by PopArt Studio from Serbia.

Botanica

Designed by Vlad Gerasimov from Georgia.

Fresh Lemons

Designed by Nathalie Ouederni from France.

Let’s Spring

“After some freezing months, it’s time to enjoy the sun and flowers. It’s party time, colours are coming, so let’s spring!” — Designed by Colorsfera from Spain.

March For Equality

“This March, we shine the spotlight on International Women’s Day, reflecting on the achieved and highlighting the necessity for a more equal and understanding world. These turbulent times that we are in require us to stand together unitedly and IWD aims to do that.” — Designed by PopArt Studio from Serbia.

Spring Bird

Designed by Nathalie Ouederni from France.

Awakening

“I am the kind of person who prefers the cold but I do love spring since it’s the magical time when flowers and trees come back to life and fill the landscape with beautiful colors.” — Designed by Maria Keller from Mexico.

Let’s Get Outside

Designed by Lívia Lénárt from Hungary.

Waiting For Spring

“As days are getting longer again and the first few flowers start to bloom, we are all waiting for spring to finally arrive.” — Designed by Naioo from Germany.

Ballet

“A day, even a whole month, isn’t enough to show how much a woman should be appreciated. Dear ladies, any day or month are yours if you decide so.” — Designed by Ana Masnikosa from Belgrade, Serbia.

MARCHing Forward

“If all you want is a little orange dinosaur MARCHing (okay, I think you get the pun) across your monitor, this wallpaper was made just for you! This little guy is my design buddy at the office and sits by (and sometimes on top of) my monitor. This is what happens when you have designer’s block and a DSLR.” — Designed by Paul Bupe Jr from Statesboro, GA.

Nothing To Do

Designed by Ricardo Gimenes from Sweden.

Orion’s Apple

Designed by Monia Gabhi from Canada.

St. Patrick’s Day

“On the 17th March, raise a glass and toast St. Patrick on St. Patrick’s Day, the Patron Saint of Ireland.” — Designed by Ever Increasing Circles from the United Kingdom.

Questions

“Doodles are slowly becoming my trademark, so I just had to use them to express this phrase I’m fond of recently. A bit enigmatic, philosophical. Inspiring, isn’t it?” — Designed by Marta Paderewska from Poland.

Explore The Forest

“This month, I want to go to the woods and explore my new world in sunny weather.” — Designed by Zi-Cing Hong from Taiwan.

The Great Beyond

“My inspiration came mostly from ‘The Great Beyond’. It’s about a dog and an astronaut exploring a strange new world.” — Designed by Lars Pauwels from Belgium.

Happy Birthday Dr. Seuss!

“March the 2nd marks the birthday of the most creative and extraordinary author ever, Dr. Seuss! I have included an inspirational quote about learning to encourage everyone to continue learning new things every day.” — Designed by Safia Begum from the United Kingdom.

Pizza Time

“Who needs an excuse to look at pizza all month?” — Designed by James Mitchell from the United Kingdom.

Tacos To The Moon And Back

Designed by Ricardo Gimenes from Sweden.

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[Reporting Core Web Vitals With The Performance API]]> https://smashingmagazine.com/2024/02/reporting-core-web-vitals-performance-api/ https://smashingmagazine.com/2024/02/reporting-core-web-vitals-performance-api/ Tue, 27 Feb 2024 12:00:00 GMT This article is sponsored by DebugBear

There’s quite a buzz in the performance community with the Interaction to Next Paint (INP) metric becoming an official Core Web Vitals (CWV) metric in a few short weeks. If you haven’t heard, INP is replacing the First Input Delay (FID) metric, something you can read all about here on Smashing Magazine as a guide to prepare for the change.

But that’s not what I really want to talk about. With performance at the forefront of my mind, I decided to head over to MDN for a fresh look at the Performance API. We can use it to report the load time of elements on the page, even going so far as to report on Core Web Vitals metrics in real time. Let’s look at a few ways we can use the API to report some CWV metrics.

Browser Support Warning

Before we get started, a quick word about browser support. The Performance API is huge in that it contains a lot of different interfaces, properties, and methods. While the majority of it is supported by all major browsers, Chromium-based browsers are the only ones that support all of the CWV properties. The only other is Firefox, which supports the First Contentful Paint (FCP) and Largest Contentful Paint (LCP) API properties.

So, we’re looking at a feature of features, as it were, where some are well-established, and others are still in the experimental phase. But as far as Core Web Vitals go, we’re going to want to work in Chrome for the most part as we go along.

First, We Need Data Access

There are two main ways to retrieve the performance metrics we care about:

  1. Using the performance.getEntries() method, or
  2. Using a PerformanceObserver instance.

Using a PerformanceObserver instance offers a few important advantages:

  • PerformanceObserver observes performance metrics and dispatches them over time. By contrast, performance.getEntries() always returns the entire list of entries recorded since performance measurement began.
  • PerformanceObserver dispatches the metrics asynchronously, which means they don’t have to block what the browser is doing.
  • The element performance metric type doesn’t work with the performance.getEntries() method anyway.

That all said, let’s create a PerformanceObserver:

const lcpObserver = new PerformanceObserver(list => {});

For now, we’re passing an empty callback function to the PerformanceObserver constructor. Later on, we’ll change it so that it actually does something with the observed performance metrics. For now, let’s start observing:

lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });

The first very important thing in that snippet is the buffered: true property. Setting this to true means that we not only observe performance metrics dispatched after we start observing, but we also receive the performance metrics that the browser queued up before we started observing.

The second very important thing to note is that we’re working with the largest-contentful-paint property. That’s what’s cool about the Performance API: it can be used to measure very specific things but also supports properties that are mapped directly to CWV metrics. We’ll start with the LCP metric before looking at other CWV metrics.

Reporting The Largest Contentful Paint

The largest-contentful-paint property looks at everything on the page, identifying the biggest piece of content on the initial view and how long it takes to load. In other words, we’re observing the full page load and getting stats on the largest piece of content rendered in view.

We already have our Performance Observer and callback:

const lcpObserver = new PerformanceObserver(list => {});
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });

Let’s fill in that empty callback so that it returns a list of entries once performance measurement starts:

// The Performance Observer
const lcpObserver = new PerformanceObserver(list => {
  // Returns the entire list of entries
  const entries = list.getEntries();
});

// Call the Observer
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });

Next, we want to know which element is pegged as the LCP. It’s worth noting that the element representing the LCP is always the last element in the ordered list of entries. So, we can look at the list of returned entries and return the last one:

// The Performance Observer
const lcpObserver = new PerformanceObserver(list => {
  // Returns the entire list of entries
  const entries = list.getEntries();
  // The element representing the LCP
  const el = entries[entries.length - 1];
});

// Call the Observer
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });

The last thing is to display the results! We could create some sort of dashboard UI that consumes all the data and renders it in an aesthetically pleasing way. Let’s simply log the results to the console rather than switch gears.

// The Performance Observer
const lcpObserver = new PerformanceObserver(list => {
  // Returns the entire list of entries
  const entries = list.getEntries();
  // The element representing the LCP
  const el = entries[entries.length - 1];

  // Log the results in the console
  console.log(el.element);
});

// Call the Observer
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });

There we go!

It’s certainly nice knowing which element is the largest. But I’d like to know more about it, say, how long it took for the LCP to render:

// The Performance Observer
const lcpObserver = new PerformanceObserver(list => {

  const entries = list.getEntries();
  const lcp = entries[entries.length - 1];

  entries.forEach(entry => {
    // Log the results in the console
    console.log(
      `The LCP is:`,
      lcp.element,
      `The time to render was ${entry.startTime} milliseconds.`,
    );
  });
});

// Call the Observer
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });

// The LCP is:
// <h2 class="author-post__title mt-5 text-5xl">…</h2>
// The time to render was 832.6999999880791 milliseconds.

Reporting First Contentful Paint

This is all about the time it takes for the very first piece of DOM to get painted on the screen. Faster is better, of course, but the way Lighthouse reports it, a “passing” score comes in between 0 and 1.8 seconds.
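
Those scoring buckets are simple to encode in a small helper. This follows Google’s published guidance of 1.8s for “good” and 3s as the upper bound of “needs improvement”; the function name is my own:

```javascript
// Classify an FCP value (in milliseconds) the way Lighthouse buckets it:
// "good" up to 1.8s, "needs improvement" up to 3s, "poor" beyond that.
function rateFCP(fcpMs) {
  if (fcpMs <= 1800) return "good";
  if (fcpMs <= 3000) return "needs improvement";
  return "poor";
}

console.log(rateFCP(509.3));
console.log(rateFCP(2500));
```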

Just like we set the type property to largest-contentful-paint to fetch performance data in the last section, we’re going to set a different type this time around: paint.

When we call paint, we tap into the PerformancePaintTiming interface that opens up reporting on first paint and first contentful paint.

// The Performance Observer
const paintObserver = new PerformanceObserver(list => {
  const entries = list.getEntries();
  entries.forEach(entry => {
    // Log the results in the console.
    console.log(
      `The time to ${entry.name} took ${entry.startTime} milliseconds.`,
    );
  });
});

// Call the Observer.
paintObserver.observe({ type: "paint", buffered: true });

// The time to first-paint took 509.29999999981374 milliseconds.
// The time to first-contentful-paint took 509.29999999981374 milliseconds.

Notice how paint spits out two results: one for the first-paint and the other for the first-contentful-paint. I know that a lot happens between the time a user navigates to a page and stuff starts painting, but I didn’t know there was a difference between these two metrics.

Here’s how the spec explains it:

“The primary difference between the two metrics is that [First Paint] marks the first time the browser renders anything for a given document. By contrast, [First Contentful Paint] marks the time when the browser renders the first bit of image or text content from the DOM.”

As it turns out, the first paint and FCP data I got back in that last example are identical. Since first paint can be anything that prevents a blank screen, e.g., a background color, I think that the identical results mean that whatever content is first painted to the screen just so happens to also be the first contentful paint.

But there’s apparently a lot more nuance to it, as Chrome measures FCP differently based on what version of the browser is in use. Google keeps a full record of the changelog for reference, so that’s something to keep in mind when evaluating results, especially if you find yourself with different results from others on your team.

Reporting Cumulative Layout Shift

How much does the page shift around as elements are painted to it? Of course, we can get that from the Performance API! Instead of largest-contentful-paint or paint, now we’re turning to the layout-shift type.

This is where browser support is dicier than other performance metrics. The LayoutShift interface is still in “experimental” status at this time, with Chromium browsers being the sole group of supporters.

As it currently stands, LayoutShift opens up several pieces of information, including a value representing the amount of shifting, as well as the sources causing it to happen. More than that, we can tell if any user interactions took place that would affect the CLS value, such as zooming, changing browser size, or actions like keydown, pointerdown, and mousedown. This is the lastInputTime property, and there’s an accompanying hadRecentInput boolean that returns true if the last input happened less than 500ms before the shift.

Got all that? We can use this to both see how much shifting takes place during page load and identify the culprits while excluding any shifts that are the result of user interactions.

const observer = new PerformanceObserver((list) => {
  let cumulativeLayoutShift = 0;
  list.getEntries().forEach((entry) => {
    // Don't count if the layout shift is a result of user interaction.
    if (!entry.hadRecentInput) {
      cumulativeLayoutShift += entry.value;
    }
    console.log({ entry, cumulativeLayoutShift });
  });
});

// Call the Observer.
observer.observe({ type: "layout-shift", buffered: true });

Given the experimental nature of this one, here’s what an entry object looks like when we query it:

Pretty handy, right? Not only are we able to see how much shifting takes place (0.128) and which element is moving around (article.a.main), but we have the exact coordinates of the element’s box from where it starts to where it ends.
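
The accumulation logic itself is simple enough to pull out into a pure function, which also makes it testable with mock entries outside the browser. The helper and the sample data below are my own, not part of the API:

```javascript
// Sum layout-shift values, skipping shifts caused by recent user input,
// mirroring what the observer callback above does.
function accumulateCLS(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput)
    .reduce((total, entry) => total + entry.value, 0);
}

// Mock layout-shift entries shaped like the real ones.
const mockShifts = [
  { value: 0.128, hadRecentInput: false },
  { value: 0.05,  hadRecentInput: true  }, // user-triggered, excluded
  { value: 0.022, hadRecentInput: false },
];

console.log(accumulateCLS(mockShifts)); // 0.128 + 0.022
```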

Reporting Interaction To Next Paint

This is the new kid on the block that got my mind wondering about the Performance API in the first place. It’s been possible for some time now to measure INP as it transitions to replace First Input Delay as a Core Web Vitals metric in March 2024. When we’re talking about INP, we’re talking about measuring the time between a user interacting with the page and the page responding to that interaction.

We need to hook into the PerformanceEventTiming class for this one. And there’s so much we can dig into when it comes to user interactions. Think about it! There’s what type of event happened (entryType and name), when it happened (startTime), what element triggered the interaction (interactionId, experimental), and when processing the interaction starts (processingStart) and ends (processingEnd). There’s also a way to exclude interactions that can be canceled by the user (cancelable).

const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    // Alias for the total duration.
    const duration = entry.duration;
    // Calculate the input delay (time before processing starts).
    const delay = entry.processingStart - entry.startTime;
    // Calculate the time spent running the event handlers.
    const lag = entry.processingEnd - entry.processingStart;

    // Don't count interactions that the user can cancel.
    if (!entry.cancelable) {
      console.log(`INP Duration: ${duration}`);
      console.log(`INP Delay: ${delay}`);
      console.log(`Event handler duration: ${lag}`);
    }
  });
});

// Call the Observer.
observer.observe({ type: "event", buffered: true });
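
Those timing components break down cleanly into a pure function as well. Given a mock entry (the shape matches PerformanceEventTiming, but the numbers are illustrative), the input delay, processing time, and presentation delay should add up to the total duration:

```javascript
// Break an event timing entry into the three phases of an interaction:
// input delay, event processing time, and presentation delay.
function breakdownINP(entry) {
  const inputDelay = entry.processingStart - entry.startTime;
  const processingTime = entry.processingEnd - entry.processingStart;
  const presentationDelay =
    entry.startTime + entry.duration - entry.processingEnd;
  return { inputDelay, processingTime, presentationDelay };
}

// A mock PerformanceEventTiming-like entry.
const mockEntry = {
  startTime: 1000,
  processingStart: 1012,
  processingEnd: 1080,
  duration: 120,
};

const { inputDelay, processingTime, presentationDelay } = breakdownINP(mockEntry);
console.log(inputDelay, processingTime, presentationDelay);
```
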

Reporting Long Animation Frames (LoAFs)

Let’s build off that last one. We can now track INP scores on our website and break them down into specific components. But what code is actually running and causing those delays?

The Long Animation Frames API was developed to help answer that question. It won’t land in Chrome stable until mid-March 2024, but you can already use it in Chrome Canary.

A long-animation-frame entry is reported every time the browser couldn’t render page content immediately as it was busy with other processing tasks. We get an overall duration for the long frame but also a duration for different scripts involved in the processing.

const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    if (entry.duration > 50) {
      // Log the overall duration of the long frame.
      console.log(`Frame took ${entry.duration} ms`);
      console.log("Contributing scripts:");
      // Log information on each script in a table.
      entry.scripts.forEach(script => {
        console.table({
          // URL of the script where the processing starts
          sourceURL: script.sourceURL,
          // Total time spent on this sub-task
          duration: script.duration,
          // Name of the handler function
          functionName: script.sourceFunctionName,
          // Why was the handler function called? For example, 
          // a user interaction or a fetch response arriving.
          invoker: script.invoker
        })
      })
    }
  });
});

// Call the Observer.
observer.observe({ type: "long-animation-frame", buffered: true });

When an INP interaction takes place, we can find the closest long animation frame and investigate what processing delayed the page response.
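To make that correlation concrete, here is a minimal sketch of how the matching could work. The helper below is my own, not part of any API: it takes an interaction entry and a list of long-animation-frame entries (plain objects with `startTime` and `duration`, standing in for real `PerformanceEntry` instances) and returns the frames whose time range overlaps the interaction.

```javascript
// Hypothetical helper: given an interaction entry and a list of
// long-animation-frame entries, return the frames whose time range
// overlaps the interaction's. Plain objects with startTime/duration
// stand in for the real PerformanceEntry instances.
function findOverlappingFrames(interaction, frames) {
  const start = interaction.startTime;
  const end = interaction.startTime + interaction.duration;
  return frames.filter(
    (frame) => frame.startTime < end && frame.startTime + frame.duration > start
  );
}

// In the browser, collect both entry types with PerformanceObserver
// (as in the snippets above) and pass the collected arrays in here.
```

Timestamps from both entry types share the same origin (the page's time origin), which is what makes this simple interval-overlap check possible.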

There’s A Package For This

The Performance API is so big and so powerful. We could easily spend an entire bootcamp learning all of the interfaces and what they provide. There’s network timing, navigation timing, resource timing, and plenty of custom reporting features available on top of the Core Web Vitals we’ve looked at.

If CWVs are what you’re really after, then you might consider looking into the web-vitals library to wrap around the browser Performance APIs.

Need a CWV metric? All it takes is a single function.

webVitals.onINP(function(info) {
  console.log(info);
}, { reportAllChanges: true });

Boom! That reportAllChanges property? That’s a way of saying we want to report data every time the metric changes instead of only when the metric reaches its final value. For example, as long as the page is open, there’s always a chance that the user will encounter an even slower interaction than the current INP interaction. So, without reportAllChanges, we’d only see the INP reported when the page is closed (or when it’s hidden, e.g., if the user switches to a different browser tab).

We can also report purely on the difference between the preliminary results and the resulting changes. From the web-vitals docs:

function logDelta({ name, id, delta }) {
  console.log(`${name} matching ID ${id} changed by ${delta}`);
}

onCLS(logDelta);
onINP(logDelta);
onLCP(logDelta);
Measuring Is Fun, But Monitoring Is Better

All we’ve done here is scratch the surface of the Performance API as far as programmatically reporting Core Web Vitals metrics. It’s fun to play with things like this. There’s even a slight feeling of power in being able to tap into this information on demand.

At the end of the day, though, you’re probably just as interested in monitoring performance as you are in measuring it. We could do a deep dive and detail what a performance dashboard powered by the Performance API is like, complete with historical records that indicate changes over time. That’s ultimately the sort of thing we can build on this — we can build our own real user monitoring (RUM) tool or perhaps compare Performance API values against historical data from the Chrome User Experience Report (CrUX).
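As a rough illustration of the RUM idea, here is one way the browser-side reporting could be sketched. The `/rum` endpoint and the payload shape are assumptions for the example, not part of any real service:

```javascript
// Package a web-vitals metric into a payload for a collection
// endpoint. The field names here are illustrative, not a standard.
function buildVitalsPayload(metric, page) {
  return {
    metric: metric.name,
    value: metric.value,
    id: metric.id,
    page: page,
    timestamp: Date.now(),
  };
}

// In the browser, wired up to the web-vitals library:
//
// webVitals.onINP((metric) => {
//   const body = JSON.stringify(buildVitalsPayload(metric, location.pathname));
//   // sendBeacon keeps working even as the page unloads.
//   navigator.sendBeacon("/rum", body);
// });
```

The server-side half — storing the payloads and charting them over time — is where the real monitoring work lives.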

Or perhaps you want a solution right now without stitching things together. That’s what you’ll get from a paid commercial service like DebugBear. All of this is already baked right in with all the metrics, historical data, and charts you need to gain insights into the overall performance of a site over time… and in real-time, monitoring real users.

DebugBear can help you identify why users are having slow experiences on any given page. If there is slow INP, what page elements are these users interacting with? What elements often shift around on the page and cause high CLS? Is the LCP typically an image, a heading, or something else? And does the type of LCP element impact the LCP score?

To help explain INP scores, DebugBear also supports the upcoming Long Animation Frames API we looked at, allowing you to see what code is responsible for interaction delays.

The Performance API can also report a list of all resource requests on a page. DebugBear uses this information to show a request waterfall chart that tells you not just when different resources are loaded but also whether each resource was render-blocking, loaded from the cache, or used for the LCP element.
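A simplified version of that idea is possible with the Resource Timing API directly. The sketch below reduces resource entries to the rows a basic waterfall needs. Two caveats: `renderBlockingStatus` is a Chromium-only property at the time of writing, and a `transferSize` of 0 is only a heuristic for a cache hit (opaque cross-origin responses also report 0).

```javascript
// Reduce resource timing entries to the fields a simple waterfall
// view needs. Works on plain objects as well as real entries.
function summarizeResources(entries) {
  return entries.map((entry) => ({
    url: entry.name,
    start: entry.startTime,
    duration: entry.duration,
    renderBlocking: entry.renderBlockingStatus === "blocking",
    likelyFromCache: entry.transferSize === 0,
  }));
}

// In the browser:
// console.table(summarizeResources(performance.getEntriesByType("resource")));
```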

In this screenshot, the blue line shows the FCP, and the red line shows the LCP. We can see that the LCP happens right after the LCP image request, marked by the blue “LCP” badge, has finished.

DebugBear offers a 14-day free trial. See how fast your website is, what’s slowing it down, and how you can improve your Core Web Vitals. You’ll also get monitoring alerts, so if there’s a web vitals regression, you’ll find out before it starts impacting Google search results.

]]>
hello@smashingmagazine.com (Geoff Graham)
<![CDATA[A Web Designer’s Accessibility Advocacy Toolkit]]> https://smashingmagazine.com/2024/02/web-designer-accessibility-advocacy-toolkit/ https://smashingmagazine.com/2024/02/web-designer-accessibility-advocacy-toolkit/ Mon, 26 Feb 2024 18:00:00 GMT Web accessibility can be challenging, particularly for clients unfamiliar with tech or compliance with The Americans With Disabilities Act (ADA). My role as a digital designer often involves guiding clients toward ADA-compliant web designs. I’ve acquired many strategies over the years for encouraging clients to adopt accessible web practices and invest in accessible user interfaces. It’s something that comes up with nearly every new project, and I decided to develop a personal toolkit to help me make the case.

Now, I am opening up my toolkit for you to have and use. While some of the strategies may be specific to me and my work, there are plenty more that cast a wider net and are more universally applicable. I’ve considered different real-life scenarios where I have had to make a case for accessibility. You may even personally identify with a few of them!

Please enjoy. As you do, remember that there is no silver bullet for “selling” accessibility. We can’t win everyone over with cajoling or terse arguments. My hope is that you are able to use this collection to establish partnerships with your colleagues and clients alike. Accessibility is something that anyone can influence at various stages in a project, and “winning” an argument isn’t exactly the point. It’s a bigger picture we’re after, one that influences how teams work together, changes habits, and develops a new level of empathy and understanding.

I begin with general strategies for discussing accessibility with clients. Following that, I provide specific language and responses you can use to introduce accessibility practices to your team and clients and advocate its importance while addressing client skepticism and concerns. Use it as a starting point and build off of it so that it incorporates points and scenarios that are more specific to your work. I sincerely hope it helps you advance accessible practices.

General Strategies

We’ll start with a few ways you can position yourself when interacting with clients. By adopting a certain posture, we can set ourselves up to be the experts in the room, the ones with solutions rather than arguments.

Showcasing Expertise

I tend to establish my expertise and tailor the information to the client’s understanding of accessibility, which could be not very much. For those new to accessibility, I offer a concise overview of its definition, evaluation, and business impact. For clients with a better grasp of accessible practices, I like to use the WCAG as a point of reference for helping frame productive discussions based on substance and real requirements.

Aligning With Client Goals

I connect accessibility to the client’s goals instead of presenting accessibility as a moral imperative. No one loves being told what to do, and talking to clients on their terms establishes a nice bridge for helping them connect the dots between the inherent benefits of accessible practices and what they are trying to accomplish. The two aren’t mutually exclusive!

In fact, there are many clear benefits for apps that make accessibility a first-class feature. Refer to the “Accessibility Benefits” section to help describe those benefits to your colleagues and clients.

Defining Accessibility In The Project Scope

I outline accessibility goals early, typically when defining the project scope and requirements. Baking accessibility into the project scope ensures that it is at least considered at this crucial stage where decisions are being made for everything from expected outcomes to architectural requirements.

User stories and personas are common artifacts for which designers are often responsible. Use these as opportunities to define accessibility in the same breath as defining who the users are and how they interact with the app. Framing stories and outcomes as user interactions in an “as-when-then-so” format provides an opening to lead with accessibility:

As a user, when I __, then I expect that __, so I can __.

Fill in the blanks. I think you’ll find that users’ expected outcomes are typically aligned with accessible experiences. Federico Francioni published his take on developing inclusive user personas, building off other excellent resources, including Microsoft’s Inclusive Design guidelines.

Being Ready With Resources and Examples

I maintain a database of resources for clients interested in learning more about accessibility. Sharing anecdotes, such as clients who’ve seen benefits from accessibility or examples of companies penalized for non-compliance, can be very impactful.

Microsoft is helpful here once again with a collection of brief videos that cover a variety of uses, from informing your colleagues and clients on basic accessibility concepts to interviews with accessibility professionals and case studies involving real users.

There are a few go-to resources I’ve bookmarked to share with clients who are learning about accessibility for the first time. What I like about these is the approachable language and clarity. “Learn Accessibility” from web.dev is especially useful because it’s framed as a 21-part course. That may sound daunting, but it’s organized in small chunks that make it manageable, and sometimes I will simply point to the Glossary to help clients understand the concepts we discuss.

And where “Learn Accessibility” is focused on specific components of accessibility, I find that the Inclusive Design Principles site has a perfect presentation of the concepts and guiding principles of inclusion and accessibility on the web.

Meanwhile, I tend to sit beside a client to look at The A11Y Project. I pick a few resources to go through. Otherwise, the amount of information can be overwhelming. I like to offer this during a project’s planning phase because the site is focused on actionable strategies that help scope work.

Leveraging User Research

User research that is specific to the client’s target audience is more convincing than general statistics alone. When possible, I try to understand those users’ needs, including what they expect, what sort of technology they use to browse online, and where they are geographically. Painting a more complete picture of users — based on real-life factors and information — offers a more human perspective and plants the first seeds of empathy in the design process.

Web analytics are great for identifying who users are and how they currently interact with the app. At the same time, they are also fraught with caveats as far as accuracy goes, depending on the tool you use and how you collect your data. That said, I use the information to support my user persona decisions and the specific requirements I write. Analytics add nice brush strokes to the picture but do not paint the entire view. So, leverage it!

The big caveat with web analytics? There’s no way to identify traffic that uses assistive tech. That’s a good thing in general as far as privacy goes, but it does mean that researching the usability of your site is best done with real users — as it is with any user research, really. The A11Y Project has excellent resources for testing screen readers, including a link to this Smashing Magazine article about manual accessibility testing by Eric Bailey as well as a vast archive of links pointing to other research.

That said, web analytics can still be very useful to help accommodate other impairments, for example, segmenting traffic by age (for improving accessibility for low vision) and geography (for improving performance gaps for those on low-powered devices). WebAIM also provides insights in a report they produced from a 2018 survey of users who report having low vision.

Leaving Room For Improvements

Chances are that your project will fall at least somewhat short of your accessibility plans. It happens! I see plenty of situations where a late deadline translates into rushed work that sacrifices quality for speed, and accessibility typically falls victim to degraded quality.

I keep track of these during the project’s various stages and attempt to document them. This way, there’s already a roadmap for inclusive and accessible improvements in subsequent releases. It’s scoped, backlogged, and ready to drop into a sprint.

For projects involving large sites with numerous accessibility issues, I emphasize that partial accessibility compliance is not the same as actual compliance. I often propose phased solutions, starting with incremental changes that fit within the current scope and budget.

And remember, just because something passes a WCAG success criterion doesn’t necessarily mean it is accessible. Passing tests is a good sign, but there will always be room for improvement.

Commonly Asked Accessibility Questions

Accessibility is a broad topic, and we can’t assume that everyone knows what constitutes an “accessible” interface. Often, when I get pushback from a colleague or client, it’s because they simply do not have the same context that I do. That’s why I like to keep a handful of answers to commonly asked questions in my back pocket. It’s amazing how answering the “basics” leads to productive discussions filled with substance rather than ones grounded in opinion.

What Do We Mean By “Web Accessibility”?

When we say “web accessibility,” we’re generally talking about making online content available and usable for anyone with a disability, whether it’s a permanent impairment or a temporary one. It’s the practice of removing friction that excludes people from gaining access to content or from completing a task. That usually involves complying with a set of guidelines that are designed to remove those barriers.

Who Creates Accessibility Guidelines?

The Web Content Accessibility Guidelines (WCAG) are created by a working group of the World Wide Web Consortium (W3C) called the Web Accessibility Initiative (WAI). The W3C develops guidelines and principles to help designers, developers, and authors like us create web experiences based on a common set of standards, including those for HTML, CSS, internationalization, privacy, security, and yes, accessibility, among many, many other areas. The WAI working group maintains the accessibility standards we call WCAG.

Who Needs Web Accessibility?

Twenty-seven percent of the U.S. population has a disability, emphasizing the widespread need for accessible web design. WCAG primarily focuses on three groups:

  1. Cognitive or learning disabilities,
  2. Visual impairments,
  3. Motor impairments.

When we make web experiences that solve these issues based on established guidelines, we’re not only doing good for those who are directly impacted by impairment but those who may be impaired in less direct ways as well, such as establishing large target sizes for those tapping a touchscreen phone with their hands full, or using proper color contrast for those navigating a screen in bright sunlight. Everyone needs — and benefits from — accessibility!

Further Reading

How Is Web Accessibility Regulated?

The Americans with Disabilities Act (ADA) is regulated by the Civil Rights Division of the U.S. Department of Justice, which was established by the Civil Rights Act of 1957. Even though there is a lot of bureaucracy in that last sentence, it’s reassuring to know the U.S. government not only believes in web accessibility but enforces it as well.

Non-compliance can result in legal action, with first-time ADA violations leading to fines of up to $75,000, increasing to $150,000 for subsequent violations. The number of lawsuits for alleged ADA breaches has surged in recent years, with more than 4,500 lawsuits filed in 2023 against sites that fail to comply with WCAG AA 2.1 alone — roughly 500 more lawsuits than 2022!

Further Reading

How Is Web Accessibility Evaluated?

Web accessibility is something we can test against. Many tools have been created to audit sites on the spot based on WCAG success criteria that specify accessible requirements. That would be a standards-based evaluation using WCAG as a reference point for auditing compliance.

WebAIM has an excellent page that compares different types of accessibility testing, reporting, and tooling. They are also quick to note that automated testing, while convenient, is not a comprehensive way to audit accessibility. Automated tools that scan websites may be able to pick up instances where mistakes in the HTML might contribute to accessibility issues and where color contrasts are insufficient. But they cannot replace or perfectly imitate a real-life person. Testing in real browsers with real people continues to be the most effective way to truly evaluate accessible web experiences.

This isn’t to say automated tools should not be part of an accessibility testing suite. In fact, they often highlight areas you may have overlooked. Even false positives are good in the sense that they force you to pause and look more closely at something. Some of the most widely used automated tools include the following:

These are just a few of the most frequent tools I use in my own testing, but there are many more, and the WAI maintains an extensive list of available tools that are worth considering. But again, remember that automated testing is not a one-to-one replacement for testing with real users.
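As a small example of putting one of these tools to work, here is a sketch built around axe-core’s `axe.run()` API, which resolves with a results object containing a `violations` array. The formatting helper is my own addition for readability, not part of the library:

```javascript
// Turn axe-core results into short, readable report lines.
// (Our own helper; axe-core itself returns a results object
// with a `violations` array of { id, impact, nodes, ... }.)
function formatViolations(results) {
  return results.violations.map(
    (violation) =>
      `${violation.id} (${violation.impact}): ${violation.nodes.length} element(s)`
  );
}

// In the browser, with the axe-core script already loaded:
// axe.run(document).then((results) => {
//   formatViolations(results).forEach((line) => console.log(line));
// });
```

A report like this is easy to paste into a ticket or share with a client, which keeps the conversation anchored to concrete findings rather than abstractions.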

Checklists can be handy for ensuring you are covering your bases:

Accessibility Benefits

When discussing accessibility, I find the most effective arguments are ones that are framed around the interests of clients and stakeholders. That way, the discussion stays within scope and helps everyone see that proper accessibility practices actually benefit business goals. Speaking in business terms is something I openly embrace because it typically supports my case.

The following are a few ways I would like to explain the positive impacts that accessibility has on business goals.

Case Studies

Sometimes, the most convincing approach is to offer examples of companies that have committed to accessible practices and come out better for it. And there are plenty of examples! I like to use case studies and reports in a similar industry or market for a more apples-to-apples comparison that stakeholders can identify with.

That said, there are great general cases involving widely respected companies and brands, including This American Life and Tesco, that demonstrate benefits such as increased organic search traffic, enhanced user engagement, and reduced site load times. For a comprehensive guide on framing these benefits, I refer to the W3C’s resource on building the business case for accessibility.

What To Say To Your Client

Let me share how focusing on accessibility can directly benefit your business. For instance, in 2005, Legal & General revamped their website with accessibility in mind and saw a substantial increase in organic search traffic exceeding 50%. This isn’t just about compliance; it’s about reaching a wider audience more effectively. By making your site more accessible, we can improve user engagement and potentially decrease load times, enhancing the overall user experience. This approach not only broadens your reach to include users with disabilities but also boosts your site’s performance in search rankings. In short, prioritizing accessibility aligns with your goal to increase online visibility and customer engagement.

Further Reading

The Curb-Cut Effect

The “curb-cut effect” refers to how features originally designed for accessibility end up benefiting a broader audience. This concept helps move the conversation away from limiting accessibility as an issue that only affects the minority.

Features like voice control, auto-complete, and auto-captions — initially created to enhance accessibility — have become widely used and appreciated by all users. This effect also includes situational impairments, like using a phone in bright sunlight or with one hand, expanding the scope of who benefits from accessible design. Big companies have found that investing in accessibility can spur innovation.

What To Say To Your Client

Let’s consider the ‘curb-cut effect’ in the context of your website. Originally, curb cuts were designed for wheelchair users, but they ended up being useful for everyone, from parents with strollers to travelers with suitcases. Similarly, many digital accessibility features we implement can enhance the experience for all your users, not just those with disabilities. For example, features like voice control and auto-complete were developed for accessibility but are now widely used by everyone. This isn’t just about inclusivity; it’s about creating a more versatile and user-friendly website. By incorporating these accessible features, we’re not only catering to a specific group but also improving the overall user experience, which can lead to increased engagement and satisfaction across your entire customer base.

Further Reading

SEO Benefits

I would like to highlight the SEO benefits that come with accessible best practices. Things like nicely structured sitemaps, a proper heading outline, image alt text, and unique link labels not only improve accessibility for humans but for search engines as well, giving search crawlers clear context about what is on the page. Stakeholders and clients care a lot about this stuff, and if they are able to come around on accessibility, then they’re effectively getting a two-for-one deal.

What To Say To Your Client

Focusing on accessibility can boost your website’s SEO. Accessible features, like clear link names and organized sitemaps, align closely with what search engines prioritize. Google even includes accessibility in its Lighthouse reporting. This means that by making your site more accessible, we’re also making it more visible and attractive to search engines. Moreover, accessible websites tend to have cleaner, more structured code. This not only improves website stability and loading times but also enhances how search engines understand and rank your content. Essentially, by improving accessibility, we’re also optimizing your site for better search engine performance, which can lead to increased traffic and higher search rankings.

Further Reading

Better Brand Alignment

Incorporating accessibility into web design can significantly elevate how users perceive a brand’s image. The ease of use that comes with accessibility not only reflects a brand’s commitment to inclusivity and social responsibility but also differentiates it in competitive markets. By prioritizing accessibility, brands can convey a personality that is thoughtful and inclusive, appealing to a broader, more diverse customer base.

What To Say To Your Client

Implementing web accessibility is more than just a compliance measure; it’s a powerful way to enhance your brand image. In the competitive landscape of e-commerce, having an accessible website sets your brand apart. It shows your commitment to inclusivity, reaching out to every potential customer, regardless of their abilities. This not only resonates with a diverse audience but also positions your brand as socially responsible and empathetic. In today’s market, where consumers increasingly value corporate responsibility, this can be a significant differentiator for your brand, helping to build a loyal customer base and enhance your overall brand reputation.

Further Reading

Cost Efficiency

I mentioned earlier how developing accessibility enhances SEO like a two-for-one package. However, there are additional cost savings that come with implementing accessibility during the initial stages of web development rather than retrofitting it later. A proactive approach to accessibility saves on the potential high costs of auditing and redesigning an existing site and helps avoid expensive legal repercussions associated with non-compliance.

What To Say To Your Client

Retrofitting a website for accessibility can be quite expensive. Consider the costs of conducting an accessibility audit, followed by potentially extensive (and expensive) redesign and redevelopment work to rectify issues. These costs can significantly exceed the investment required to build accessibility into the website from the start. Additionally, by making your site accessible now, we can avoid the legal risks and potential fines associated with ADA non-compliance. Investing in accessibility early on is a cost-effective strategy that pays off in the long run, both financially and in terms of brand reputation. Besides, with the SEO benefits that we get from implementing accessibility, we’re saving lots of money and work that would otherwise be sunk into redevelopment.

Further Reading

Addressing Client Concerns

Still getting pushback? There are certain arguments I hear time and again, and I have started keeping a collection of responses to them. In some cases, I have left placeholder instructions for tailoring the responses to your project.

“Our users don’t need it.”

Statistically, 27% of the U.S. population does have some form of disability that affects their web use. [Insert research on your client’s target audience, if applicable.] Besides permanent impairments, we should also take into account situational ones. For example, imagine one of your potential clients trying to access your site on a sunny golf course, struggling to see the screen due to glare, or someone in a noisy subway unable to hear audio content. Accessibility features like high contrast modes or captions can greatly enhance their experience. By incorporating accessibility, we’re not only catering to users with disabilities but also ensuring a seamless experience for anyone in less-than-ideal conditions. This approach ensures that no potential client is left out, aligning with the goal to reach and engage a wider audience.

“Our competitors aren’t doing it.”

It’s interesting that your competitors haven’t yet embraced accessibility, but this actually presents a unique opportunity for your brand. Proactively pursuing accessibility not only protects you from the same legal exposure your competitors face but also positions your brand as a leader in customer experience. By prioritizing accessibility when others are not, you’re differentiating your brand as more inclusive and user-friendly. This both appeals to a broader audience and showcases your brand’s commitment to social responsibility and innovation.

“We’ll do it later because it’s too expensive.”

I understand concerns about timing and costs. However, it’s important to note that integrating accessibility from the start is far more cost-effective than retrofitting it later. If accessibility is considered after development is complete, you will face additional expenses for auditing accessibility, followed by potentially extensive work involving a redesign and redevelopment. This process can be significantly more expensive than building in accessibility from the beginning. Furthermore, delaying accessibility can expose your business to legal risks. With the increasing number of lawsuits for non-compliance with accessibility standards, the cost of legal repercussions could far exceed the expense of implementing accessibility now. The financially prudent move is to work on accessibility now.

“We’ve never had complaints.”

It’s great to hear that you haven’t received complaints, but it’s important to consider that users who struggle to access your site might simply choose not to return rather than take the extra step to complain about it. This means you could potentially be missing out on a significant market segment. Additionally, when accessibility issues do lead to complaints, they can sometimes escalate into legal cases. Proactively addressing accessibility can help you tap into a wider audience and mitigate the risk of future lawsuits.

“It will affect the aesthetics of the site.”

Accessibility and visual appeal can coexist beautifully. I can show you examples of websites that are both compliant and visually stunning, demonstrating that accessibility can enhance rather than detract from a site’s design. Additionally, when we consider specific design features from an accessibility standpoint, we often find they actually improve the site’s overall usability and SEO, making the site more intuitive and user-friendly for everyone. Our goal is to blend aesthetics with functionality, creating an inclusive yet visually appealing online presence.
Handling Common Client Requests

This section looks at frequent scenarios I’ve encountered in web projects where accessibility considerations come into play. Each situation requires carefully balancing the client’s needs/wants with accessibility standards. I’ll leave placeholder comments in the examples so you are able to address things that are specific to your project.

The Client Directly Requests An Inaccessible Feature

When clients request features they’ve seen online — like unfocusable carousels and complex auto-playing animations — it’s crucial to discuss them in terms that address accessibility concerns. In these situations, I acknowledge the appealing aspects of their inspirations but also highlight their accessibility limitations.

That’s a really neat feature, and I like it! That said, I think it’s important to consider how users interact with it. [Insert specific issues that you note, like carousels without pause buttons or complex animations.] My recommendation is to take the elements that work well — [insert specific observation] — and adapt them into something more accessible, such as [insert suggestion]. This way, we maintain the aesthetic appeal while ensuring the website is accessible and enjoyable for every visitor.

The Client Provides Inaccessible Content

This is where we deal with things like non-descriptive page titles, link names, form labels, and color contrasts for a better “reading” experience.

Page Titles

Sometimes, clients want page titles to be drastically different than the link in the navigation bar. Usually, this is because they want a more detailed page title while keeping navigation links succinct.

I understand the need for descriptive and engaging page titles, but it’s also essential to maintain consistency with the navigation bar for accessibility. Here’s our recommendation to balance both needs:
  • Keyword Consistency: You can certainly have a longer page title to provide more context, but it should include the same key terms as the navigation link. This ensures that users, especially those using screen readers to announce content, can easily understand when they have correctly navigated between pages.
  • Succinct Titles With Descriptive Subtitles: Another approach is to keep the page title succinct, mirroring the navigation link, and then add a descriptive tagline or subtitle on the page itself. This way, the page maintains clear navigational consistency while providing detailed context in the subtitle.

These approaches aim to align the user’s navigation experience with their expectations, ensuring clarity and accessibility.

Links

A common issue with web content provided by clients is the use of non-descriptive calls to action with phrases and link labels, like “Read More” or “Click Here.” Generic terms can be confusing for users, particularly for those using screen readers, as they don’t provide context about what the link leads to or the nature of the content on the other end.

I’ve noticed some of the link labels say things like “Read More” or “Click Here” in the design. I would consider revising them because they could be more descriptive, especially for those relying on screen readers who have to put up with hearing the label announced time and again. We recommend labels that clearly indicate where the link leads. [Provide a specific example.] This approach makes links more informative and helps all users alike by telling them in advance what to expect when clicking a certain link. It enhances the overall user experience by providing clarity and context.
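One lightweight way to enforce this during content reviews is a hypothetical helper that flags generic link labels (the phrase list below is an illustrative starting point, not exhaustive):

```javascript
// Hypothetical helper: flag link labels that give screen reader users
// no context about the destination.
const GENERIC_LABELS = new Set(["read more", "click here", "learn more", "here"]);

function isGenericLabel(text) {
  return GENERIC_LABELS.has(text.trim().toLowerCase());
}

isGenericLabel("Click Here");           // true
isGenericLabel("Read the 2024 report"); // false
```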

Forms

Proper form labels are a critical aspect of accessible web design. Labels should clearly indicate the purpose of each input field, whether it’s required, and the expected format of the information. This clarity is essential for all users, especially for those using screen readers or other assistive technologies. Plus, there are accessible approaches to pairing labels and inputs that developers ought to be familiar with.

It’s important that each form field is clearly labeled to inform users about the type of data expected. Additionally, indicating which fields are required and providing format guidelines can greatly enhance the user experience. [Provide a specific example from the client’s content, e.g., we can use ‘Phone (10 digits, no separators)’ for a phone number field to clearly indicate the format.] These labels not only aid in navigation and comprehension for all users but also ensure that the forms are accessible to those using assistive technologies. Well-labeled forms improve overall user engagement and reduce the likelihood of errors or confusion.
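As a rough sketch of that advice, here is a hypothetical helper that renders a labelled field as an HTML string, pairing the label and input via matching for/id attributes so assistive technologies announce the label with the field (all names and parameters are illustrative):

```javascript
// Hypothetical sketch: a labelled form field with an explicit label/input
// pairing, a required marker, and an inline format hint.
function labelledField({ id, label, type = "text", required = false, hint = "" }) {
  const caption = hint ? `${label} (${hint})` : label;
  return `<label for="${id}">${caption}${required ? " *" : ""}</label>` +
    `<input id="${id}" name="${id}" type="${type}"${required ? " required" : ""}>`;
}

labelledField({
  id: "phone",
  label: "Phone",
  type: "tel",
  required: true,
  hint: "10 digits, no separators"
});
```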

Brand Palette

Clients will occasionally approach me with brand palettes whose colors produce too little contrast when paired together. This happens when, for instance, on a website with a white background, a client wants to use their brand accent color for buttons, but that color simply blends into the background color, making it difficult to read. The solution is usually creating a slightly adjusted tint or shade that’s used specifically for digital interfaces — UI colors, if you will. Atul Varma’s “Accessible Color Palette Builder” is a great starting point, as is this UX Lift lander with alternatives.

We recommend expanding the brand palette with color values that work more effectively in web designs. By adjusting the tint or shade just a bit, we can achieve a higher level of contrast between colors when they are used together. Colors render differently depending on the device and screen they are on, and even though we might be using colors consistent with brand identity, those colors will still display differently to users. By adding colors that are specifically designed for web use, we can enhance the experience for our users while staying true to the brand’s essence.
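To put a number behind that recommendation, the WCAG 2.x contrast ratio between two colors can be computed directly. A minimal sketch (six-digit hex input assumed, parsing kept deliberately simple):

```javascript
// Relative luminance per WCAG 2.x for a six-digit hex color like "#336699".
function relativeLuminance(hex) {
  const [r, g, b] = [1, 3, 5]
    .map(i => parseInt(hex.slice(i, i + 2), 16) / 255)
    .map(c => (c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4));
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio between any two colors; order does not matter.
function contrastRatio(a, b) {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

contrastRatio("#ffffff", "#000000"); // 21, the maximum possible ratio
contrastRatio("#777777", "#ffffff"); // ≈ 4.48, just below the 4.5:1 AA threshold
```

Normal-size text needs at least 4.5:1 under WCAG AA, which makes it easy to show a client exactly how far a proposed tint or shade needs to move.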

Suggesting An Accessible Feature To Clients

Proactively suggesting features like sitemaps, pause buttons, and focus indicators is crucial. I’ll provide tips on how to effectively introduce these features to clients, emphasizing their importance and benefit.

Sitemap

Sitemaps play a crucial role in both accessibility and SEO, but clients sometimes hesitate to include them due to concerns about their visual appeal. The challenge is to demonstrate the value of sitemaps without compromising the site’s overall aesthetic.

I understand your concerns about the visual appeal of sitemaps. However, it’s important to consider their significant role in both accessibility and SEO. For users with screen readers, a sitemap greatly simplifies site navigation. From an SEO perspective, it acts like a directory, helping search engines effectively index all your pages, making your site more discoverable and user-friendly. To address the aesthetic aspect, let’s look at how major companies like Apple and Microsoft incorporate sitemaps. Their designs are minimal yet consistent with the site’s overall look and feel. [If applicable, show how a competitor is using sitemaps.] By incorporating a well-designed sitemap, we can improve user experience and search visibility without sacrificing the visual quality of your website.

Accessible Carousels

Carousels are contentious design features. While some designers are against them and have legitimate reasons for it, I believe that with the right approach, they can be made accessible and effective. There are plenty of resources that provide guidance on creating accessible carousels.

When a client requests a home page carousel in a new site design, it’s worth considering alternative solutions that can avoid the common pitfalls of carousels, such as low click-through rates, increased load times, content being pushed below the fold, and potentially annoying auto-advancing features.

I see the appeal of using a carousel on your homepage, but there are a few considerations to keep in mind. Carousels often have low engagement rates and can slow down the site. They also tend to move key content below the fold, which might not be ideal for user engagement. An auto-advancing carousel can also be distracting for users. Instead, we could explore alternative design solutions that effectively convey your message without these drawbacks. [Insert recommendation, e.g., For instance, we could use a hero image or video with a strong call-to-action or a grid layout that showcases multiple important segments at once.] These alternatives can be more user-friendly and accessible while still achieving the visual and functional goals of a carousel.

If we decide to use a carousel, I make a point of discussing the necessary accessibility features with the client right from the start. Many clients aren’t aware that elements like pause buttons are crucial for making auto-advancing carousels accessible. To illustrate this, I’ll show them examples of accessible carousel designs that incorporate these features effectively.

Further Reading

Pause Buttons

Any animation that starts automatically, lasts more than five seconds, and is presented in parallel with other content, needs a pause button per WCAG Success Criterion 2.2.2. A common scenario is when clients want a full-screen video on their homepage without a pause button. It’s important to explain the necessity of pause buttons for meeting accessibility standards and ensuring user comfort without compromising the website’s aesthetics.

I understand your desire for a dynamic, engaging homepage with a full-screen video. However, it’s essential for accessibility purposes that any auto-playing animation that is longer than five seconds includes a pause button. This is not just about compliance; it’s about ensuring that all visitors, including those with disabilities, can comfortably use your site.

The good news is that pause buttons can be designed to be sleek and non-intrusive, complementing your site’s aesthetics rather than detracting from them. Think of it like the sound toggle buttons on videos. They’re there when you need them, but they don’t distract from the viewing experience. I can show you some examples of beautifully integrated pause buttons that maintain the immersive feel of the video while ensuring accessibility standards are met.
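The behavior itself is small. Here is a minimal sketch of the toggle’s state, assuming a video-like object exposing play() and pause() (the function name and label strings are illustrative):

```javascript
// Minimal sketch of a pause toggle. The returned label is what the button
// should display (and expose as its accessible name) for the *next* action.
function createPauseToggle(media, { playing = true } = {}) {
  return {
    toggle() {
      playing = !playing;
      playing ? media.play() : media.pause();
      return playing ? "Pause" : "Play";
    }
  };
}
```

Wired to a real button element, the returned label would update both the visible text and the accessible name, so screen reader users always hear the action the button will perform next.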
Conclusion

That’s it! This is my complete toolkit for discussing web accessibility with colleagues and clients at the start of new projects. It’s not always easy to make a case, which is why I try to appeal from different angles, using a multitude of resources and research to support my case. But with practice, care, and true partnership, it’s possible to not only influence the project but also make accessibility a first-class feature in the process.

Please use the resources, strategies, and talking points I have provided. I share them to help you make your case to your own colleagues and clients. Together, incrementally, we can take steps toward a more accessible web that is inclusive to all people.

And when in doubt, remember the core principles we covered:

  • Show your expertise: Adapt accessibility discussions to fit the client’s understanding, offering basic or in-depth explanations based on their familiarity.
  • Align with client goals: Connect accessibility with client-specific benefits, such as SEO and brand enhancement.
  • Define accessibility in project scope: Include accessibility as an integral part of the design process and explain how it is evaluated.
  • Be prepared with Resources: Keep a collection of relevant resources, including success stories and the consequences of non-compliance.
  • Utilize User Research: Use targeted user research to inform design choices, demonstrating accessibility’s broad impact.
  • Manage Incremental Changes: Suggest iterative changes for large projects to address accessibility in manageable steps.
]]>
hello@smashingmagazine.com (Yichan Wang)
<![CDATA[Vanilla JavaScript, Libraries, And The Quest For Stateful DOM Rendering]]> https://smashingmagazine.com/2024/02/vanilla-javascript-libraries-quest-stateful-dom-rendering/ https://smashingmagazine.com/2024/02/vanilla-javascript-libraries-quest-stateful-dom-rendering/ Thu, 22 Feb 2024 18:00:00 GMT In his seminal piece “The Market For Lemons”, renowned web crank Alex Russell lays out the myriad failings of our industry, focusing on the disastrous consequences for end users. This indignation is entirely appropriate according to the bylaws of our medium.

Frameworks factor highly in that equation, yet there can also be good reasons for front-end developers to choose a framework, or library for that matter: Dynamically updating web interfaces can be tricky in non-obvious ways. Let’s investigate by starting from the beginning and going back to first principles.

Markup Categories

Everything on the web starts with markup, i.e. HTML. Markup structures can roughly be divided into three categories:

  1. Static parts that always remain the same.
  2. Variable parts that are defined once upon instantiation.
  3. Variable parts that are updated dynamically at runtime.

For example, an article’s header might look like this:

<header>
  <h1>«Hello World»</h1>
  <small>«123» backlinks</small>
</header>

Variable parts are wrapped in «guillemets» here: “Hello World” is the respective title, which only changes between articles. The backlinks counter, however, might be continuously updated via client-side scripting; we’re ready to go viral in the blogosphere. Everything else remains identical across all our articles.

The article you’re reading now subsequently focuses on the third category: Content that needs to be updated at runtime.

Color Browser

Imagine we’re building a simple color browser: A little widget to explore a pre-defined set of named colors, presented as a list that pairs a color swatch with the corresponding color value. Users should be able to search color names and toggle between hexadecimal color codes and Red, Green, and Blue (RGB) triplets. We can create an inert skeleton with just a little bit of HTML and CSS:

See the Pen Color Browser (inert) [forked] by FND.

Client-Side Rendering

We’ve grudgingly decided to employ client-side rendering for the interactive version. For our purposes here, it doesn’t matter whether this widget constitutes a complete application or merely a self-contained island embedded within an otherwise static or server-generated HTML document.

Given our predilection for vanilla JavaScript (cf. first principles and all), we start with the browser’s built-in DOM APIs:

function renderPalette(colors) {
  let items = [];
  for(let color of colors) {
    let item = document.createElement("li");
    items.push(item);

    let value = color.hex;
    makeElement("input", {
      parent: item,
      type: "color",
      value
    });
    makeElement("span", {
      parent: item,
      text: color.name
    });
    makeElement("code", {
      parent: item,
      text: value
    });
  }

  let list = document.createElement("ul");
  list.append(...items);
  return list;
}
Note:
The above relies on a small utility function for more concise element creation:
function makeElement(tag, { parent, children, text, ...attribs }) {
  let el = document.createElement(tag);

  if(text) {
    el.textContent = text;
  }

  for(let [name, value] of Object.entries(attribs)) {
    el.setAttribute(name, value);
  }

  if(children) {
    el.append(...children);
  }

  parent?.appendChild(el);
  return el;
}
You might also have noticed a stylistic inconsistency: Within the items loop, newly created elements attach themselves to their container. Later on, we flip responsibilities, as the list container ingests child elements instead.

Voilà: renderPalette generates our list of colors. Let’s add a form for interactivity:

function renderControls() {
  return makeElement("form", {
    method: "dialog",
    children: [
      createField("search", "Search"),
      createField("checkbox", "RGB")
    ]
  });
}

The createField utility function encapsulates DOM structures required for input fields; it’s a little reusable markup component:

function createField(type, caption) {
  let children = [
    makeElement("span", { text: caption }),
    makeElement("input", { type })
  ];
  return makeElement("label", {
    children: type === "checkbox" ? children.reverse() : children
  });
}

Now, we just need to combine those pieces. Let’s wrap them in a custom element:

import { COLORS } from "./colors.js"; // an array of { name, hex, rgb } objects

customElements.define("color-browser", class ColorBrowser extends HTMLElement {
  colors = [...COLORS]; // local copy

  connectedCallback() {
    this.append(
      renderControls(),
      renderPalette(this.colors)
    );
  }
});

Henceforth, a <color-browser> element anywhere in our HTML will generate the entire user interface right there. (I like to think of it as a macro expanding in place.) This implementation is somewhat declarative1, with DOM structures being created by composing a variety of straightforward markup generators, clearly delineated components, if you will.

1 The most useful explanation of the differences between declarative and imperative programming I’ve come across focuses on readers. Unfortunately, that particular source escapes me, so I’m paraphrasing here: Declarative code portrays the what while imperative code describes the how. One consequence is that imperative code requires cognitive effort to sequentially step through the code’s instructions and build up a mental model of the respective result.
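To make the distinction concrete, here is the same trivial list rendered both ways (plain strings, to keep the example self-contained):

```javascript
// Declarative: describe *what* the result is.
const declarative = ["red", "green", "blue"].map(name => `<li>${name}</li>`).join("");

// Imperative: spell out *how* to build it, step by step.
let imperative = "";
for (const name of ["red", "green", "blue"]) {
  imperative += `<li>${name}</li>`;
}

declarative === imperative; // true: same result, different cognitive model
```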

Interactivity

At this point, we’re merely recreating our inert skeleton; there’s no actual interactivity yet. Event handlers to the rescue:

class ColorBrowser extends HTMLElement {
  colors = [...COLORS];
  query = null;
  rgb = false;

  connectedCallback() {
    this.append(renderControls(), renderPalette(this.colors));
    this.addEventListener("input", this);
    this.addEventListener("change", this);
  }

  handleEvent(ev) {
    let el = ev.target;
    switch(ev.type) {
    case "change":
      if(el.type === "checkbox") {
        this.rgb = el.checked;
      }
      break;
    case "input":
      if(el.type === "search") {
        this.query = el.value.toLowerCase();
      }
      break;
    }
  }
}
Note:
handleEvent means we don’t have to worry about function binding. It also comes with various advantages. Other patterns are available.

Whenever a field changes, we update the corresponding instance variable (sometimes called one-way data binding). Alas, changing this internal state2 is not reflected anywhere in the UI so far.

2 In your browser’s developer console, check document.querySelector("color-browser").query after entering a search term.

Note that this event handler is tightly coupled to renderControls internals because it expects a checkbox and search field, respectively. Thus, any corresponding changes to renderControls — perhaps switching to radio buttons for color representations — now need to take into account this other piece of code: action at a distance! Expanding this component’s contract to include field names could alleviate those concerns.
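One way to loosen that coupling, sketched here with assumed field names, is to make the names part of the contract and derive state updates from them rather than from input types:

```javascript
// Hypothetical sketch: field names are the shared contract between the
// markup generator and the event handler; the handler no longer cares
// whether "rgb" is rendered as a checkbox or a radio button.
function applyFieldChange(state, name, value) {
  switch (name) {
  case "query":
    return { ...state, query: String(value).toLowerCase() };
  case "rgb":
    return { ...state, rgb: Boolean(value) };
  default:
    return state;
  }
}

applyFieldChange({ query: null, rgb: false }, "query", "Teal");
// → { query: "teal", rgb: false }
```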

We’re now faced with a choice between:

  1. Reaching into our previously created DOM to modify it, or
  2. Recreating it while incorporating a new state.
Rerendering

Since we’ve already defined our markup composition in one place, let’s start with the second option. We’ll simply rerun our markup generators, feeding them the current state.

class ColorBrowser extends HTMLElement {
  // [previous details omitted]

  connectedCallback() {
    this.#render();
    this.addEventListener("input", this);
    this.addEventListener("change", this);
  }

  handleEvent(ev) {
    // [previous details omitted]
    this.#render();
  }

  #render() {
    this.replaceChildren();
    this.append(renderControls(), renderPalette(this.colors));
  }
}

We’ve moved all rendering logic into a dedicated method3, which we invoke not just once on startup but whenever the state changes.

3 You might want to avoid private properties, especially if others might conceivably build upon your implementation.

Next, we can turn colors into a getter to only return entries matching the corresponding state, i.e. the user’s search query:

class ColorBrowser extends HTMLElement {
  query = null;
  rgb = false;

  // [previous details omitted]

  get colors() {
    let { query } = this;
    if(!query) {
      return [...COLORS];
    }

    return COLORS.filter(color => color.name.toLowerCase().includes(query));
  }
}
Note:
I’m partial to the bouncer pattern.
Toggling color representations is left as an exercise for the reader. You might pass this.rgb into renderPalette and then populate <code> with either color.hex or color.rgb, perhaps employing this utility:
function formatRGB(value) {
  return value.split(",").
    map(num => num.toString().padStart(3, " ")).
    join(", ");
}

This now produces interesting (annoying, really) behavior:

See the Pen Color Browser (defective) [forked] by FND.

Entering a query seems impossible as the input field loses focus after a change takes place, leaving the input field empty. However, entering an uncommon character (e.g. “v”) makes it clear that something is happening: The list of colors does indeed change.

The reason is that our current do-it-yourself (DIY) approach is quite crude: #render erases and recreates the DOM wholesale with each change. Discarding existing DOM nodes also resets the corresponding state, including form fields’ value, focus, and scroll position. That’s no good!

Incremental Rendering

The previous section’s data-driven UI seemed like a nice idea: Markup structures are defined once and re-rendered at will, based on a data model cleanly representing the current state. Yet our component’s explicit state is clearly insufficient; we need to reconcile it with the browser’s implicit state while re-rendering.

Sure, we might attempt to make that implicit state explicit and incorporate it into our data model, like including a field’s value or checked properties. But that still leaves many things unaccounted for, including focus management, scroll position, and myriad details we probably haven’t even thought of (frequently, that means accessibility features). Before long, we’re effectively recreating the browser!

We might instead try to identify which parts of the UI need updating and leave the rest of the DOM untouched. Unfortunately, that’s far from trivial, which is where libraries like React came into play more than a decade ago: On the surface, they provided a more declarative way to define DOM structures4 (while also encouraging componentized composition, establishing a single source of truth for each individual UI pattern). Under the hood, such libraries introduced mechanisms5 to provide granular, incremental DOM updates instead of recreating DOM trees from scratch — both to avoid state conflicts and to improve performance6.

4 In this context, that essentially means writing something that looks like HTML, which, depending on your belief system, is either essential or revolting. The state of HTML templating was somewhat dire back then and remains subpar in some environments.
5 Nolan Lawson’s “Let’s learn how modern JavaScript frameworks work by building one” provides plenty of valuable insights on that topic. For even more details, lit-html’s developer documentation is worth studying.
6 We’ve since learned that some of those mechanisms are actually ruinously expensive.

The bottom line: If we want to encapsulate markup definitions and then derive our UI from a variable data model, we kinda have to rely on a third-party library for reconciliation.
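In much simplified form, the reconciliation such libraries perform amounts to diffing old and new state and applying only the difference. A toy sketch over lists of keys:

```javascript
// Toy reconciliation sketch: compute which keyed items to add and remove
// instead of discarding and rebuilding the whole tree.
function diffKeys(prev, next) {
  const prevSet = new Set(prev);
  const nextSet = new Set(next);
  return {
    added: next.filter(key => !prevSet.has(key)),
    removed: prev.filter(key => !nextSet.has(key))
  };
}

diffKeys(["red", "green"], ["green", "blue"]);
// → { added: ["blue"], removed: ["red"] }
```

Real libraries go much further (moves, attribute updates, batching), but the principle is the same: untouched nodes, and their implicit state, are left alone.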

Actus Imperatus

At the other end of the spectrum, we might opt for surgical modifications. If we know what to target, our application code can reach into the DOM and modify only those parts that need updating.

Regrettably, though, that approach typically leads to calamitously tight coupling, with interrelated logic being spread all over the application while targeted routines inevitably violate components’ encapsulation. Things become even more complicated when we consider increasingly complex UI permutations (think edge cases, error reporting, and so on). Those are the very issues that the aforementioned libraries had hoped to eradicate.

In our color browser’s case, that would mean finding and hiding color entries that do not match the query, not to mention replacing the list with a substitute message if no matching entries remain. We’d also have to swap color representations in place. You can probably imagine how the resulting code would end up dissolving any separation of concerns, messing with elements that originally belonged exclusively to renderPalette.

class ColorBrowser extends HTMLElement {
  // [previous details omitted]

  handleEvent(ev) {
    // [previous details omitted]

    for(let item of this.#list.children) {
      item.hidden = !item.textContent.toLowerCase().includes(this.query);
    }
    if([...this.#list.children].filter(el => !el.hidden).length === 0) {
      // inject substitute message
    }
  }

  #render() {
    // [previous details omitted]

    this.#list = renderPalette(this.colors);
  }
}

As a wise man once said: That’s too much knowledge!

Things get even more perilous with form fields: Not only might we have to update a field’s specific state, but we would also need to know where to inject error messages. While reaching into renderPalette was bad enough, here we would have to pierce several layers: createField is a generic utility used by renderControls, which in turn is invoked by our top-level ColorBrowser.

If things get hairy even in this minimal example, imagine having a more complex application with even more layers and indirections. Keeping on top of all those interconnections becomes all but impossible. Such systems commonly devolve into a big ball of mud where nobody dares change anything for fear of inadvertently breaking stuff.

Conclusion

There appears to be a glaring omission in standardized browser APIs. Our preference for dependency-free vanilla JavaScript solutions is thwarted by the need to non-destructively update existing DOM structures. That’s assuming we value a declarative approach with inviolable encapsulation, otherwise known as “Modern Software Engineering: The Good Parts.”

As it currently stands, my personal opinion is that a small library like lit-html or Preact is often warranted, particularly when employed with replaceability in mind: A standardized API might still happen! Either way, adequate libraries have a light footprint and don’t typically present much of an encumbrance to end users, especially when combined with progressive enhancement.

I don’t wanna leave you hanging, though, so I’ve tricked our vanilla JavaScript implementation to mostly do what we expect it to:

See the Pen Color Browser [forked] by FND.

]]>
hello@smashingmagazine.com (Frederik Dohr)
<![CDATA[A Practical Guide To Designing For Colorblind People]]> https://smashingmagazine.com/2024/02/designing-for-colorblindness/ https://smashingmagazine.com/2024/02/designing-for-colorblindness/ Tue, 20 Feb 2024 12:00:00 GMT Too often, accessibility is seen as a checklist, but it’s much more complex than that. Our colors might have good contrast, but if people perceive those colors very differently, an interface can still be extremely difficult to use.

Depending on our color combinations, people with color weakness or who are colorblind won’t be able to tell them apart. Here are key points for designing with colorblindness — for better and more reliable color choices.

This article is part of our ongoing series on design patterns. It’s also a part of the video library on Smart Interface Design Patterns 🍣 and is available in the live UX training as well.

Colorweakness and Colorblindness

It’s worth stating that, like any other disability, colorblindness is a spectrum, as Bela Gaytán rightfully noted. Each experience is unique, and different people perceive colors differently. The grades of colorblindness vary significantly, so there is no consistent condition that would be the same for everyone.

When we speak about colors, we should distinguish between two different conditions that people might have. Some people experience deficiencies in “translating” light waves into red-ish, green-ish or blue-ish colors. If one of these translations is not working properly, a person is at least colorweak. If the translation doesn’t work at all, a person is colorblind.

Depending on the color combinations we use, people with color weakness or people who are colorblind won’t be able to tell them apart. The most common case is a red-green deficiency, which affects 8% of European men and 0.5% of European women.

Note: the insights above come from “How Your Colorblind And Colorweak Readers See Your Colors,” a wonderful three-part series by Lisa Charlotte Muth on how colorblind and color weak readers perceive colors, things to consider when visualizing data and what it’s like to be colorblind.

Design Guidelines For Colorblindness

As Gareth Robins has kindly noted, the safe option is to either give people a colorblind toggle with shapes or use a friendly ubiquitous palette like viridis. Of course, we should never ever ask a colorblind person, “What color is this?” as they can’t correctly answer that question.

Red-green deficiencies are more common in men.
Use blue if you want users to perceive color as you do.
✅ Use any 2 colors as long as they vary by lightness.
✅ Colorblind users can’t tell red and green apart.
✅ Colorblind users can’t tell dark green and brown apart.
✅ Colorblind users can’t tell red and brown apart.
✅ The safest color palette is to mix blue with orange or red.

🚫 Don’t mix red, green and brown together.
🚫 Don’t mix pink, turquoise and grey together.
🚫 Don’t mix purple and blue together.
🚫 Don’t use green and pink if you use red and blue.
🚫 Don’t mix green with orange, red, or blue of the same lightness.

Never Rely On Colors Alone

It’s worth noting that the safest bet is to never rely on colors alone to communicate data. Use labels, icons, shapes, rectangles, triangles, and stars to indicate differences and show relationships. Be careful when combining hues and patterns: patterns change how bright or dark colors will be perceived.

Who Can Use? is a fantastic little tool to quickly see how a color palette affects different people with visual impairments — from reduced sensitivity to red, to red/green blindness to cataracts, glaucoma, low vision and even situational events such as direct sunlight and night shift mode.

Use lightness to build gradients, not just hue. Use different lightnesses in your gradients and color palettes so readers with a color vision deficiency will still be able to distinguish your colors. And most importantly, always include colorweak and colorblind people in usability testing.
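As a sketch of that advice, a sequential palette can vary lightness at a fixed hue. The hue, saturation, and lightness bounds below are arbitrary choices for illustration:

```javascript
// Sketch: build a sequential palette by varying lightness at a fixed hue,
// so readers with color vision deficiencies can still rank the steps.
function lightnessScale(hue, steps, { min = 25, max = 85 } = {}) {
  return Array.from({ length: steps }, (_, i) =>
    `hsl(${hue} 70% ${min + (i * (max - min)) / (steps - 1)}%)`);
}

lightnessScale(220, 4);
// → ["hsl(220 70% 25%)", "hsl(220 70% 45%)", "hsl(220 70% 65%)", "hsl(220 70% 85%)"]
```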

Useful Resources on Colorblindness

Useful Colorblindness Tools

Meet Smart Interface Design Patterns

If you are interested in similar insights around UX, take a look at Smart Interface Design Patterns, our 10h-video course with 100s of practical examples from real-life projects — with a live UX training starting March 7. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview.

Meet Smart Interface Design Patterns, our video course on interface design & UX.

100 design patterns & real-life examples.
10h-video course + live UX training. Free preview.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Mobile Accessibility Barriers For Assistive Technology Users]]> https://smashingmagazine.com/2024/02/mobile-accessibility-barriers-assistive-technology-users/ https://smashingmagazine.com/2024/02/mobile-accessibility-barriers-assistive-technology-users/ Mon, 19 Feb 2024 10:00:00 GMT I often hear that native mobile app accessibility is more challenging than web accessibility. Teams don’t know where to start, where to find guidance on mobile accessibility, or how to prevent mobile-specific accessibility barriers.

As someone who works for a company with an active community of mobile assistive technology users, I get to learn about the challenges from the user’s perspective. In fact, I recently ran a survey with our community about their experiences with mobile accessibility, and I’d like to share what I learned with you.

If you only remember one thing from this article, make it this:

Half of assistive technology users said that accessibility barriers have a significant impact on their day-to-day well-being.

Accessibility goes beyond making products user-friendly. It can impact the quality of life for people with disabilities.

Types Of Mobile Assistive Technology

I typically group assistive technologies into three categories:

  1. Screen readers: software that converts information on a screen to speech or braille.
  2. Screen magnifiers: software or system settings to magnify the screen, increase contrast, and otherwise modify the content to make it easier to see.
  3. Alternative navigation: software and/or hardware that replaces an input device such as a keyboard, mouse, or touchscreen.

Across all categories of assistive technology, 81% of the people I surveyed change the accessibility settings on their smartphone and/or tablet. Examples of accessibility settings include the following:

  • Increasing the font size;
  • Turning on captions;
  • Extending the tap duration;
  • Inverting colours.

There are smartphone settings such as dark mode that benefit people with disabilities even though they aren’t considered accessibility settings.

Now, let’s dive into the specifics of each assistive technology category and learn more about the user preferences that shape their digital experiences.

Screen Reader Users

Both iPhone and Android smartphones come with a screen reader installed. On iPhone, the screen reader is VoiceOver, and on Android, it is TalkBack. Both screen readers allow users to explore by touching and dragging their fingers to hear content under their fingers read out loud or to swipe forwards and backward through all elements on the screen in a linear fashion. Both screen readers also let users navigate by headings or other types of elements.

The mobile screen reader users I surveyed tend to have several devices that work together to cover all their accessibility needs, and they support businesses that prioritize mobile accessibility.

  • Nearly half of screen reader users also own a smartwatch.
  • Half use an external keyboard with their smartphone, and a third use a braille display.
  • Almost all factor the accessibility of apps and mobile sites into deciding which businesses to support.

That last point is really important! Accessibility truly inspires purchasing decisions and brand loyalty.

Screen Magnification Users

In addition to magnification, Android smartphones also have a variety of vision-related accessibility features that allow users to change screen colours and text sizes. The iPhone Magnifier app lets users apply colour filters, adjust brightness or contrast, and detect people or doors nearby.

My survey showed that screen magnification users had the highest percentage of tablet ownership, with 77% owning both a smartphone and a tablet. Alternative navigation users followed closely, with 62% owning a tablet, but only 42% of the screen reader users I surveyed own a tablet.

Screen magnification users are less likely to investigate the accessibility of paid apps before purchasing (63%) compared to screen reader and alternative navigation users (89% and 91%, respectively). I suspect this is because device magnification, contrast, and colour inversion settings may allow users to work around some design choices that make an app inaccessible.

Alternative Navigation Users

Switch Access (Android) and Switch Control (iOS) let users interact with their devices using one or more switches instead of the touchscreen. There are a variety of things you can use as a switch: an external device, keyboard, sounds, or the smartphone camera or buttons.

Item scan allows users to highlight items one by one and select the item in focus by activating the switch. Point and scan moves a horizontal line down from the top of the screen. When this line is over the desired element, the user activates their switch to stop it. A vertical line then moves from the left of the screen. When this line is also over the element, the user stops it with their switch. The user can then select the element in the crosshairs of the two lines. In addition to these two methods, users can also customize buttons to perform gestures such as swipe down or swipe left.
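The item scan flow described above boils down to a simple loop: a timer advances the highlight through the items in order, and activating the switch selects whatever is highlighted. A minimal sketch of that logic (the item names are purely illustrative, and real switch software additionally handles grouping, nesting, and timing configuration):

```javascript
// Minimal simulation of switch access item scanning: the highlight
// advances through items on a timer, and activating the switch
// selects whichever item is currently highlighted.
class ItemScanner {
  constructor(items) {
    this.items = items; // elements to scan through, in order
    this.index = 0;     // currently highlighted item
  }
  // Called on each tick of the scan timer: move the highlight to the
  // next item, wrapping back to the first after the last one.
  advance() {
    this.index = (this.index + 1) % this.items.length;
    return this.items[this.index];
  }
  // Called when the user activates their switch.
  select() {
    return this.items[this.index];
  }
}

const scanner = new ItemScanner(["Back", "Search", "Add to cart"]);
scanner.advance(); // highlight moves from "Back" to "Search"
console.log(scanner.select()); // "Search"
```

Because selection waits on the scan cycle, every extra item in the scan order costs the user time — one reason small, precise layouts matter so much for switch users.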

Android devices can be controlled through Voice Access, and iPhones through Voice Control. Both allow users to speak commands to interact with their smartphone instead of using the touchscreen. The command “Say names” can expose labels that aren’t obvious. The command “Show numbers” allows users to say “tap two” to select the element labeled with the number 2. “Show grid” is a command often used as a last resort to select an element. This approach overlays a grid across the screen and allows users to select the grid square where the element is located.

Alternative navigation users were least likely to own a smartwatch (26%) out of all three assistive technology categories, according to my survey. All the alternative navigation users that own a smartwatch, except for one, use it for health tracking. 24% use an external switch device with their smartphone.

Most Common Mobile Accessibility Barriers

Now that you know about some of the assistive technologies available on Android and iPhone devices, we can explore some specific challenges commonly encountered by users when navigating websites and native apps on their smartphones.

I’ll outline an inclusive development process that can help you discover barriers that are specific to your own app. If you need general tips on what to avoid right now, here are common mobile accessibility issues that assistive technology users encounter. To get this list, I asked the community to select up to three of their most challenging accessibility barriers on mobile.

Unlabelled Buttons Or Links

Unlabelled buttons and links are the number one challenge reported by assistive technology users. Screen reader users are impacted the most by unlabelled elements, but so are people who use voice commands to interact with their smartphone.
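What a screen reader or voice control software announces is the element’s accessible name. The real computation (the W3C “accname” algorithm) is involved, but a simplified sketch of the priority order for a button illustrates why an icon-only button with no label is a barrier (the plain objects below stand in for DOM elements):

```javascript
// Simplified sketch of how a button's accessible name is derived.
// The real accname algorithm also covers aria-labelledby, alt text,
// and more; this only approximates the priority order for a button.
function accessibleName(button) {
  // aria-label takes precedence over visible content…
  if (button.ariaLabel && button.ariaLabel.trim()) return button.ariaLabel.trim();
  // …then the button's text content…
  if (button.textContent && button.textContent.trim()) return button.textContent.trim();
  // …then the title attribute, as a weak last resort.
  if (button.title && button.title.trim()) return button.title.trim();
  return ""; // no name: a screen reader may announce only "button"
}

// An icon-only button with no label is a barrier…
console.log(accessibleName({ textContent: "" })); // ""
// …while an aria-label gives it a spoken name.
console.log(accessibleName({ ariaLabel: "Search", textContent: "" })); // "Search"
```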

Small Buttons Or Links

Buttons and links that are too small to tap with a finger or require great precision to select using switch functions are a challenge for anyone with mobility issues. Tiny buttons and links are also hard to see for anyone with low vision.
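A quick size check during QA can flag tiny targets before users encounter them. The sketch below defaults to a 44×44 CSS pixel minimum, which matches the WCAG 2.1 AAA target size and common platform guidance (WCAG 2.2 level AA requires a smaller 24×24 minimum); treat the threshold as a project decision rather than a single universal rule:

```javascript
// Quick tap target check. The default 44px minimum matches the WCAG 2.1
// AAA target size and common platform guidance; WCAG 2.2 level AA
// requires at least 24x24 CSS pixels. Pick the threshold deliberately.
function checkTargetSize(width, height, minimum = 44) {
  return {
    passes: width >= minimum && height >= minimum,
    // How much larger the target needs to grow, if any.
    widthShortfall: Math.max(0, minimum - width),
    heightShortfall: Math.max(0, minimum - height),
  };
}

console.log(checkTargetSize(48, 48).passes); // true
console.log(checkTargetSize(32, 24)); // { passes: false, widthShortfall: 12, heightShortfall: 20 }
```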

Gesture Interactions

Gestures like swipe to delete, tap and drag, and anything more complex than a simple tap or double tap can cause problems for many users. Gestures can be difficult to discover, and if you’re not a power mobile user, you may never figure them out. Your best bet is to include a button that performs the same action as the gesture. Custom actions can expose more options, but only to assistive technology users, not to people with disabilities who may not use assistive technology, such as people with cognitive disabilities.

Elements Blocking Parts Of The Screen

A chat button that is always hovering may cover parts of the content. A sticky header or footer may take up a big portion of the screen when the user zooms in or magnifies their screen. These screen blockers can make it very difficult or impossible for some users to view content.

Missing Error Messages

Keeping a submit button inactive until a form is correctly filled out is often used as an alternative to providing error messages. That approach can be a challenge for assistive technology users in particular, but also anyone with a cognitive disability or who isn’t tech-savvy. Sometimes, error messages exist, but they aren’t announced to screen reader users.
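One alternative is to always allow submission and return explicit, per-field messages that can be rendered next to each field and announced to screen reader users (for example, through an `aria-live` region). A minimal, hypothetical validation sketch, with made-up field names and rules:

```javascript
// Return explicit, per-field error messages instead of silently
// disabling the submit button. In a real app, each message would be
// rendered next to its field and announced via an aria-live region.
// Field names and rules here are made up for illustration.
function validateForm({ email = "", password = "" }) {
  const errors = [];
  if (!email.includes("@")) {
    errors.push("Email: enter an address in the form name@example.com.");
  }
  if (password.length < 8) {
    errors.push("Password: must be at least 8 characters long.");
  }
  return errors; // an empty array means the form can be submitted
}

console.log(validateForm({ email: "kate@example.com", password: "hunter2hunter2" })); // []
console.log(validateForm({ email: "kate", password: "short" }).length); // 2
```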

Resizing Text And Pinch And Zoom

When an app doesn’t respect the font size increases set by a user through accessibility settings, people who need larger text must find alternative ways to read content. Some websites disable pinch and zoom — a feature that is not just useful for enlarging text but is often used to see images better.

Other Mobile Accessibility Barriers

The accessibility barriers that weren’t mentioned as often but still represent significant challenges for assistive technology users include:

  • Low contrast
    If the contrast between text and background is low, it makes it harder for folks with low vision to read. Customizing contrast settings can make content more legible for a broader range of people.
  • No dark mode
    For some people, black text on a white background can be painful to the eyes or trigger migraines.
  • Fixed orientation
    Not being able to rotate from portrait to landscape can impact people who have their device in a fixed position on a wheelchair or people with low vision who use landscape mode to make text and images appear larger.
  • Missing captions
    No captions on videos were also cited as a barrier. This is one that I relate to personally, as I rely on captions myself because of my hearing disability.

I knew I couldn’t capture all of the mobile accessibility barriers in my list of choices, so I gave the survey respondents a free text field to enter their own. Here’s what they said:

  • Screen reader users encounter unlabelled images or labels that don’t make sense. AI-based image recognition technology can help but often can’t provide the same context that a designer would. Screen reader users also run into apps that unexpectedly move their screen reader’s focus, changing their location on the screen and causing confusion.
  • Voice Control users find apps and sites that aren’t responsive to their voice commands. They have to try alternate commands to activate interactive elements, sometimes slowing them down significantly.
  • Complex navigation, such as large, dynamic lists or menus that expand and collapse automatically, can be challenging to use with assistive technologies. There are often no workarounds for navigation barriers, so they can determine whether a user abandons an app or website.

Inclusive Design Approaches For Mobile

It’s important to avoid getting overwhelmed and not doing anything at all because mobile accessibility seems hard. Instead, focus on fixing the most critical issues first, then release, celebrate, and repeat the process.

Ideally, you’ll want to change your processes to avoid creating more accessibility issues in the future. Here’s a high-level process for inclusive app development:

  • Do research with users to understand how their assistive technology works and what challenges they have with your existing app.
  • Create designs for accessibility features such as font scaling and state and focus indicators.
  • Revise designs and get feedback from users that can be applied in development.
  • Annotate design files for accessibility based on user feedback and best practices.
  • Create a new build and use automated testing tools to find barriers.
  • Do manual QA testing on the new build using your phone’s accessibility settings.
  • Release a private build and test with users again before the production release.

Conclusion

Fixing and, more importantly, avoiding mobile accessibility barriers can be easier if you understand how assistive technologies work and the common challenges users encounter on mobile devices. Remember the key takeaway from the beginning of this article: half of the people surveyed felt accessibility barriers had a significant impact on their well-being. With that in mind, I encourage you not to let a lack of understanding of technical accessibility compliance hold you back from building inclusive apps and mobile-friendly websites.

When you look at accessibility from the lens of usability for everyone and learn from assistive technology users, you take a step towards empowering everyone to independently interact with your products and services, playing your part in building a more equitable Internet.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Kate Kalcevich)
<![CDATA[How Accessibility Standards Can Empower Better Chart Visual Design]]> https://smashingmagazine.com/2024/02/accessibility-standards-empower-better-chart-visual-design/ https://smashingmagazine.com/2024/02/accessibility-standards-empower-better-chart-visual-design/ Wed, 14 Feb 2024 15:00:00 GMT Data visualizations are graphics that leverage our visual system and innate capabilities to gather, accumulate, and process information in our environment, as shown in the animation in Figure 1.0.

Figure 1.0. An animation demonstrating our preattentive processing capability. Based on a lecture by Dr. Stephen Franconeri. (Large preview)

As a result, we’re able to quickly spot trends, patterns, and outliers in all the images we see. Can you spot the visual patterns in Figure 1.1?

In this example, there are patterns defined by the size of the shapes, the use of fills and borders, and the use of different types of shapes. These characteristics, or visual encodings, are the building blocks of visualizations. Good visualizations provide a glanceable view of a large data set we otherwise wouldn’t be able to comprehend.

Accessibility Challenges With Data Visualizations

Visualizations typically serve a wide array of use cases and can be quite complex. A lot of care goes into choosing the right encodings to represent each metric. Designers and engineers will use colors to draw attention to more important metrics or information and highlight outliers. Oftentimes, as these design decisions are made, considerations for people with vision disabilities are missed.

Vision disabilities affect hundreds of millions of people worldwide. For example, about 300 million people have color-deficient vision, and it’s a condition that affects 1 in 12 men.1

1 Colour Blind Awareness (2023)

Most people with these conditions don’t use assistive technology when viewing the data. Because of this, the visual design of the chart needs to meet them where they are.

Figure 1.2 is an example of a donut chart. At first glance, it might seem like the categorical color palette matches the theme of digital wellbeing. It’s calm, it’s cool, and it may even invoke a feeling of wellbeing.

Figure 1.3 highlights how this same chart will appear to someone with a protanopia condition. You’ll notice that it is slightly less readable because the Other and YouTube categories appearing at the top of the donut are indistinguishable from one another.

For someone with achromatopsia, the chart will appear as it does in Figure 1.4

In this case, I’d argue that the chart isn’t really telling us anything. It’s nearly impossible to read, and swapping it out for a data table would arguably be more useful. At this point, you might be wondering how to fix this. Where should you start?

Start With Web Standards

Web standards can help us improve our design. In this case, Web Content Accessibility Guidelines (WCAG) will provide the most comprehensive set of requirements to start with. Guidelines call for two considerations. First, all colors must achieve the proper contrast ratio with their neighboring elements. Second, visualizations need to use something other than color to convey meaning. This can be accomplished by including a second encoding or adding text, images, icons, or patterns. While this article focuses on achieving WCAG 2.1 standards, the same concepts can be used to achieve WCAG 2.2 standards.

Web Standards Challenges

Meeting the web standards is trickier than it might first seem. Let’s dive into a few examples showing how difficult it is to ensure data will be understood at a glance while meeting the standards.

Challenge 1: Color Contrast

According to the WCAG 2.1 (level AA) standards, graphics such as chart elements (lines, bars, areas, nodes, edges, links, and so on) should all achieve a minimum 3:1 contrast ratio with their neighboring elements. Neighboring elements may include other chart elements, interaction states, and the chart’s background. Incidentally, if you’re not sure your colors are achieving the correct minimum ratio, you can check your palette here. Additionally, all text elements should achieve a minimum 4.5:1 contrast ratio with their background. Figure 1.5 depicts a sample categorical color palette that follows the recommended standards.
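Both ratios come from WCAG’s relative luminance formula: linearize each sRGB channel, weight the channels, then compare the two luminance values. A small implementation of the WCAG 2.1 definition:

```javascript
// WCAG relative luminance for an sRGB color given as [r, g, b] 0-255.
function relativeLuminance([r, g, b]) {
  const [lr, lg, lb] = [r, g, b].map((channel) => {
    const c = channel / 255;
    // Undo sRGB gamma encoding, per the WCAG 2.1 definition.
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * lr + 0.7152 * lg + 0.0722 * lb;
}

// Contrast ratio ranges from 1:1 (identical colors) to 21:1 (black on white).
function contrastRatio(colorA, colorB) {
  const [lighter, darker] = [relativeLuminance(colorA), relativeLuminance(colorB)]
    .sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

console.log(Math.round(contrastRatio([0, 0, 0], [255, 255, 255]))); // 21
// #767676 on white is the familiar minimum-passing gray for 4.5:1 text.
console.log(contrastRatio([118, 118, 118], [255, 255, 255]) >= 4.5); // true
```

A check like this is easy to run over an entire palette, which is handy when auditing how many shades survive a given background color.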

This is quite a bold palette. When applying a compliant palette to a chart, it might look like the example in Figure 1.6.

While this example meets the color contrast requirements, there’s a tradeoff. The chart’s focal point is now lost. The red segments at the bottom of each stacked bar represent the most important metrics illustrated in this chart. They represent errors or a count of items that need your attention. Since the chart features bold colors, all of which are equally competing for our attention, it’s now more difficult to see the items that matter most.

Challenge 2: Dual Encodings, Or Conveying Meaning Without Color

To minimize reliance on color to convey meaning, WCAG 2.1 (level A) standards also call for the use of something other than color to convey meaning. This may be a pattern, texture, icon, text overlay, or an entirely different visual encoding.

It’s easy to throw a pattern on top of a categorical fill color and call it a day, as illustrated in Figure 1.7. But is the chart still readable? Is it glanceable? In this case, the segments appear to run into one another. In his book, The Visual Display of Quantitative Information, Edward Tufte describes the importance of minimizing chartjunk, or unnecessary visual design elements that limit one’s ability to read the chart. This raises the question: do the WCAG standards encourage us to add unnecessary chartjunk to the visualization?

Following the standards verbatim can lead us down the path of creating a really noisy visualization.

Let The Standards Empower vs Constrain Design

Over the past several years, my working group at Google has learned that it’s easier to meet the WCAG visual design requirements when they’re considered at the beginning of the design process instead of trying to update existing charts to meet the standard. The latter approach leads to charts with unnecessary chartjunk, just like the one previously depicted in Figure 1.7, and reduced usability. Considering accessibility first will enable you to create a visualization that’s not only accessible but useful. We’re calling this our accessibility-first approach to chart design. Now, let’s see some examples.

Solving For Color Contrast

Let’s revisit the color contrast requirement via the example in Figure 1.8. In this case, the most important metric is represented by the red segments appearing at the bottom of each bar in the series. The red color represents a count of items in a failing state. Since both colors in this palette compete for our attention, it’s difficult to focus on the metric that matters most. The chart is no longer glanceable.

Focus On Essential Elements Only

By stretching the standards a bit, we can balance a11y and glanceability a lot better. Only the visual elements essential for interpreting the visualization need to achieve the color contrast requirement. In the case of Figure 1.8, we can use borders that achieve the required contrast ratio while using lighter fills everywhere except the point of focus. In Figure 1.9, you’ll notice your attention now shifts down to the metrics that matter most.

Figure 1.9. ✅ DO: Consider using a combination of outlines and fills to meet contrast requirements while maintaining a focal point. (Large preview)

Dark Themes For The Win

Most designers I know love a good dark theme like the one used in Figure 2.0. It looks nice, and dark themes often result in visually stunning charts.

More importantly, a dark theme offers an accessibility advantage. When building on top of a dark background, we can use a wider array of color shades that will still achieve the minimum required contrast ratio.

According to an audit conducted by Google’s Data Accessibility Working Group, the 61 shades of the Google Material palette from 2018 achieved the minimum 3:1 contrast ratio when placed on a dark background. This is depicted in Figure 2.1. Only 40 shades of Google Material colors achieved the same contrast ratio when placed on a white background. The 50% increase in available shades when moving from a light background to a dark background makes a huge difference. Having access to more shades enables us to draw focus to items that matter most.

With this in mind, let’s revisit the earlier donut chart example in Figure 2.2. For now, let’s keep the white background, as it’s a core part of Google’s brand.

Figure 2.2. ✅ DO: Use a combination of fills and borders that achieve the minimum contrast ratios to improve the readability of your chart. (Large preview)

While this is a great first step, there’s still more work to do. Let’s take a closer look.

Solving For Dual Encodings And Minimizing Chartjunk

As shown in Figure 2.3, color is our only way of connecting segments in the donut to the corresponding categories in the legend. Despite our best efforts to follow color contrast standards, the chart can still be difficult to read for people with certain vision disabilities. We need a dual encoding, or something other than color, to convey meaning.

How might we do this without adding noise or reducing the chart’s readability or glanceability? Let’s start with the text.

Integrating Text And Icons

Adding text to a visualization is a great way to solve the dual encoding problem. Let’s use our donut chart as an example. If we move the legend labels into the graph, as illustrated in Figure 2.4, we can visually connect them to their corresponding segments. As a result, there is no longer a need for a legend, and the labels become the second encoding.

Let’s look at a few other ways to provide a dual encoding while maximizing readability. This will keep us from rushing to apply unnecessary chartjunk like the example previously highlighted in Figure 1.7.

Depending on the situation, the shape of the data, or the available screen real estate, we may not have the luxury of overlaying text on top of a visualization. In cases like the one in Figure 2.5, it’s still okay to use iconography. For example, if we’re dealing with a very limited number of categories, the added iconography can still act as a dual encoding.

Some charts can have upwards of hundreds of categories, which makes it difficult to add iconography or text. In these cases, we must revisit the purpose of the chart and decide if we need to differentiate categories. Perhaps color, along with a dual encoding, can be used to highlight other aspects of the data. The example in Figure 2.6 shows a line chart with hundreds of categories.

We did a few things with color to convey meaning here:

  1. Bright colors are used to depict outliers within the data set.
  2. A neutral gray color is applied to all nominal categories.

In this scenario, we can once again use a very limited set of shapes for differentiating specific categories.

The Benefits Of Small Multiples And Sparklines

There are still times when it’s important to differentiate between all categories depicted in a visualization. Consider, for example, the tangled mess of a chart depicted in Figure 2.7.

In this case, a more accessible solution would include breaking the chart into individual mini charts or sparklines, as depicted in Figure 2.8. This solution is arguably better for everyone because it makes it easier to see the individual trend for each category. It’s more accessible because we’ve completely removed the reliance on color and appended text to each of the mini charts, which is better for the screen reader experience.

Reserve Fills For Items That Need Your Attention

Earlier, we examined using a combination of fills and outlines to achieve color contrast requirements. Red and green are commonly used to convey status. For someone who is red/green colorblind, this can be very problematic. As an alternative, the status icons in Figure 2.9 reserve fills for the items that need your attention. We co-designed this solution with some help from customers who are colorblind. It’s arguably more scannable for people who are fully sighted, too.

Embracing Relevant Metaphors

In 2022, we launched a redesigned Fitbit mobile app for the masses. One of my favorite visualizations from this launch is a chart showing your heart rate throughout the day. As depicted in Figure 3.0, this chart shows when your heart rate crosses into different zones. Dotted lines were used to depict each of these zone thresholds. We used the spacing between the dots as our dual encoding, which invokes a feeling of a “visual” heartbeat. Threshold lines with closely spaced dots imply a higher heart rate.

Continuing the theme of using fun, relevant metaphors, we even based our threshold spacing on the Fibonacci Sequence. This enabled us to represent each threshold with a noticeably different visual treatment. For this example, we knew we were on the right track as these accessibility considerations tested well with people who have color-deficient vision.
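As an illustrative sketch (not Fitbit’s actual implementation), Fibonacci numbers could drive the gap in each threshold line’s SVG `stroke-dasharray` so that every zone gets a noticeably different treatment, with the highest zone getting the tightest dots:

```javascript
// Illustrative only: derive the dot spacing of each heart-rate zone
// threshold line from the Fibonacci sequence, so every zone's dotted
// line looks noticeably different.
function fibonacci(count) {
  const seq = [1, 2];
  while (seq.length < count) {
    seq.push(seq[seq.length - 1] + seq[seq.length - 2]);
  }
  return seq.slice(0, count);
}

// One SVG stroke-dasharray per zone: a fixed dot length and a
// Fibonacci gap, reversed so the highest zone has the tightest dots.
function zoneDashArrays(zoneCount, dot = 2) {
  return fibonacci(zoneCount)
    .reverse()
    .map((gap) => `${dot} ${gap}`);
}

console.log(zoneDashArrays(4)); // [ '2 5', '2 3', '2 2', '2 1' ]
```

Because consecutive Fibonacci gaps differ by a growing margin, adjacent thresholds stay visually distinct even for people who can’t rely on color.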

Accessible Interaction States

Color contrast and encodings also need to be considered when showing interactions like mouse hover, selection, and keyboard focus, like the examples in Figure 3.1. The same rules apply here. In this example, the hover, focus, and clicked states of each bar are delineated by elements that appear above and below the bar. As a result, these elements only need to achieve a 3:1 contrast ratio with the white background and not the bars themselves. Not only did this pattern test well in multiple usability studies, but it was also designed so that the states could overlap. For example, the hover state and selected state can appear simultaneously and still meet accessibility requirements.

Finding Your Inspiration

For some more challenging projects, we’ve taken inspiration from unexpected areas.

For example, we looked to nature (Figure 3.2) to help us consider methods for visualizing the effects of cloud moisture on an LTE network, as sketched in Figure 3.3.

We’ve taken inspiration from halftone printing processes (Figure 3.4) to think about how we might reimagine a heatmap with a dual encoding, as depicted in Figure 3.5.

We’ve also taken inspiration from architecture and how people move through buildings (Figure 3.6) to consider methods for showing the scope and flow of data into a donut chart as depicted in Figure 3.7.

Figure 3.7. Applying inspiration from architecture and a building’s flow. (Large preview)

In this case, the animated inner ring highlights the scope of the donut chart when it’s empty and indicates that it will fill up to 100%. Animation is a great technique, but it presents other accessibility challenges and should either time out or have a stop control.

In some cases, we were even inspired to explore new versions of existing visualization types, like the one depicted in Figure 3.8. This case study highlights a step-by-step guide to how we landed on this example.

Getting People On Board With Accessibility

One key lesson is that it’s important to get colleagues on board with accessibility as soon as possible. Your compliant designs may not look quite as pretty as your non-compliant designs and may be open to criticism.

So, how can you get your colleagues on board? For starters, evangelism is key. Provide examples like the ones included here, which can help your colleagues build empathy for people with vision disabilities. Find moments to share the work with your company’s leadership team, spreading awareness. Team meetings, design critiques, AMA sessions, organization forums, and all-hands are a good start. Oftentimes, colleagues may not fully understand how accessibility requirements apply to charting or how their visualizations are used by people with disabilities.

While share-outs are a great start, that communication is one-way. We found that it’s easier to build momentum when you invite others to participate in the design process. Invite them into brainstorming meetings, design reviews, codesign sessions, and the problem space to help them appreciate how difficult these challenges are. Enlist their help, too.

By engaging with colleagues, we were able to pinpoint our champions within the group or those people who were so passionate about the topic they were willing to spend extra time building demos, prototypes, design specs, and research repositories. For example, at Google, we were able to publish our Top Tips for Data Accessibility on the Material Design blog.

Aside from good citizenship and building a grassroots start, there are ways to get the business on board. Pointing to regulations like Section 508 in America and the European Accessibility Act is another good way to encourage your business to dive deeper into your product’s accessibility. It’s also an effective mechanism for getting funding and ensuring accessibility is on your product’s roadmap. Once you’ve made the business case and you’ve identified the accessibility champions on your team, it’s time to start designing.

Conclusion

Accessibility is more than compliance. Accessibility considerations can and will benefit everyone, so it’s important not to shove them into a special menu or mode or forget about them until the end of the design process. When you consider accessibility from the start, the WCAG standards also suddenly seem a lot less constraining than when you try to retrofit existing charts for accessibility.

The examples here were built over the course of 3 years, and they’re based on valuable lessons learned along the way. My hope is that you can use the tested designs in this article to get a head start. And by taking an accessibility-first approach, you’ll end up with overall better data visualizations — ones that fully take into account how all people gather, accumulate, and process information.

Resources

To get started thinking about data accessibility, check out some of these resources:

Getting started

ACM

Contrast checking tool

WCAG requirements

Material design best practices and specs

We’re incredibly proud of our colleagues who contributed to the research and examples featured in this article. This includes Andrew Carter, Ben Wong, Chris Calo, Gerard Rocha, Ian Hill, Jenifer Kozenski Devins, Jennifer Reilly, Kai Chang, Lisa Kaggen, Mags Sosa, Nicholas Cottrell, Rebecca Plotnick, Roshini Kumar, Sierra Seeborn, and Tyler Williamson. Without everyone’s contributions, we wouldn’t have been able to advance our knowledge of accessible chart visual design.

]]>
hello@smashingmagazine.com (Kent Eisenhuth)
<![CDATA[A Practical Guide To Designing For Children]]> https://smashingmagazine.com/2024/02/practical-guide-design-children/ https://smashingmagazine.com/2024/02/practical-guide-design-children/ Tue, 13 Feb 2024 12:00:00 GMT Children start interacting with the web when they are 3–5 years old. How do we design for children? What do we need to keep in mind while doing so? And how do we meet the expectations of the most demanding users you can possibly find: parents? Well, let’s find out.

This article is part of our ongoing series on design patterns. It’s also a part of the video library on Smart Interface Design Patterns 🍣 and is available in the live UX training as well.

Embrace Playing, Encourage Small Wins

Designing for children is difficult. Children tend to lose focus and motivation. They quit when they get bored, and they move on if they can’t get anywhere quickly enough. They need steady achievements. So, as designers, we need to appreciate, reward, and encourage small wins to develop habits and support learning — with progress tracking and gamification.

As Deb Gelman, author of Design for Kids, discovered, children communicate volumes by how they play, what they choose to play with, how long they choose to play with it, and when they decide to play with something else. Yet they don’t get as disappointed when something isn’t working. They just choose to browse or play something else.

Most importantly, we can’t design for children without testing. Even better: bring children to co-design their digital experiences and bring parents to co-design reliable guardrails. With a fun, smart, and safe product, parents will spread the word about you faster than you ever could.

Always Focus On A Two-Year Age Range

As designers, we should always keep in mind that “children” represent a very diverse range of behaviors and abilities. There are vast differences between age groups (3–5, 6–8, and 9–12), both in how users navigate and in how we communicate with them.

In general, when designing for children, focus on a two-year age range, max. These days, children start interacting with the web at the age of 3–5. And the very first interactions they learn are swipe, scroll, video controls, and “Home”.

Getting Parents On Board

Whenever you are designing for children, you always design for parents as well. In fact, parents are the most demanding users you can find. They have absolutely no mercy in reviews, requests, complaints, and ratings on the stores.

That’s not surprising. Parents need safety guarantees, regulations, and certifications — a sort of reassurance that you treat their safety and privacy seriously, especially when it comes to third-party integrations or advertising. In fact, they are often willing to pay more for apps just to avoid advertising.

Parents often value reviews from teachers, educators, doctors, and other parents like themselves. And they need to have parental controls to set specific time limits, rules, access, and permissions.

As Rodrigo Seoane has pointed out in an email, a major concern to keep in mind is “how the majority of initiatives for kids rely on and create dependencies on extrinsic motivations. The reward model keeps their attention in the short term, but as a core gamified mechanic, it is problematic in the long run, reducing their cognitive capacity and creating a barrier to developing any intrinsic motivation.” So whenever possible, design to increase intrinsic motivation.

Design Guidelines For Children-Friendly Design
  • Design large text (18–19px) with large tap targets (min 75×75px).
  • Use typefaces that approximate how children learn to write.
  • Translate text into attractive visuals, icons, sounds, characters.
  • Avoid bottom buttons as kids tap on them by mistake all the time.
  • Children expect feedback on every single action they perform.
  • Don’t patronize: show age-appropriate content for the age range you’re designing for.
  • Be transparent: children can’t distinguish ads or promotions from real content.
  • You’re always designing for both children and parents.
  • Parents are the most demanding users you can find. They have no mercy in reviews and ratings.
  • Design parental controls for time limits, rules, and access.
  • Instead of extrinsic rewards, design to increase intrinsic motivation.

Useful Resources

Digital Toolkits For Children

Books and eBooks

Meet Smart Interface Design Patterns

If you are interested in similar insights around UX, take a look at Smart Interface Design Patterns, our 10h-video course with 100s of practical examples from real-life projects — with a live UX training starting March 8. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview.

Meet Smart Interface Design Patterns, our video course on interface design & UX.

100 design patterns & real-life examples.
10h-video course + live UX training. Free preview.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[How To Draw Radar Charts In Web]]> https://smashingmagazine.com/2024/02/draw-radar-charts-web/ https://smashingmagazine.com/2024/02/draw-radar-charts-web/ Fri, 09 Feb 2024 15:00:00 GMT I got to work with a new type of chart for data visualization called a radar chart when a project asked for it. It was new to me, but the idea is that there is a circular, two-dimensional circle with plots going around the chart. Rather than simple X and Y axes, each plot on a radar chart is its own axis, marking a spot between the outer edge of the circle and the very center of it. The plots represent some sort of category, and when connecting them together, they are like vertices that form shapes to help see the relationship of category values, not totally unlike the vectors in an SVG.

Sometimes, the radar chart is called a spider chart, and it’s easy to see why. The axes that flow outward intersect with the connected plots and form a web-like appearance. So, if your Spidey senses were tingling at first glance, you know why.

You already know where we’re going with this: We’re going to build a radar chart together! We’ll work from scratch with nothing but HTML, CSS, and JavaScript. But before we go there, it’s worth noting a couple of things about radar charts.

First, you don’t have to build them from scratch. Chart.js and D3.js are readily available with convenient approaches that greatly simplify the process. Seeing as I needed just one chart for the project, I decided against using a library and took on the challenge of making it myself. I learned something new, and hopefully, you do as well!

Second, there are caveats to using radar charts for data visualization. While they are indeed effective, they can also be difficult to read when multiple series stack up. The relationships between plots are not nearly as decipherable as, say, bar charts. The order of the categories around the circle affects the overall shape, and the scale between series has to be consistent for drawing conclusions.

That all said, let’s dive in and get our hands sticky with data plots.

The Components

The thing I like immediately about radar charts is that they are inherently geometrical. Connecting plots produces a series of angles that form polygon shapes. The sides are straight lines. And CSS is absolutely wonderful for working with polygons given that we have the CSS polygon() function for drawing them by declaring as many points as we need in the function’s arguments.

We will start with a pentagonal-shaped chart with five data categories.

See the Pen Radar chart (Pentagon) [forked] by Preethi Sam.

There are three components we need to establish in HTML before we work on styling. Those would be:

  1. Grids: These provide the axes over which the diagrams are drawn. It’s the spider web of the bunch.
  2. Graphs: These are the polygons we draw with the coordinates of each data plot before coloring them in.
  3. Labels: The text that identifies the categories along the graphs’ axes.

Here’s how I decided to stub that out in HTML:

<!-- GRIDS -->
<div class="wrapper">
  <div class="grids polygons">
    <div></div>
  </div>
  <div class="grids polygons">
    <div></div>
  </div>
  <div class="grids polygons">
    <div></div>
  </div>
</div>

<!-- GRAPHS -->
<div class="wrapper">
  <div class="graphs polygons">
    <div><!-- Set 1 --></div>
  </div>
  <div class="graphs polygons">
    <div><!-- Set 2 --></div>
  </div>
  <div class="graphs polygons">
    <div><!-- Set 3 --></div>
  </div>
  <!-- etc. -->
</div>

<!-- LABELS -->
<div class="wrapper">
  <div class="labels">Data A</div>
  <div class="labels">Data B</div>
  <div class="labels">Data C</div>
  <div class="labels">Data D</div>
  <div class="labels">Data E</div>
  <!-- etc. -->
</div>

I’m sure you can read the markup and see what’s going on, but we’ve got three parent elements (.wrapper) that each holds one of the main components. The first parent contains the .grids, the second parent contains the .graphs, and the third parent contains the .labels.

Base Styles

We’ll start by setting up a few color variables we can use to fill things in as we go:

:root {
  --color1: rgba(78, 36, 221, 0.6); /* graph set 1 */
  --color2: rgba(236, 19, 154, 0.6); /* graph set 2 */
  --color3: rgba(156, 4, 223, 0.6); /* graph set 3 */
  --colorS: rgba(255, 0, 95, 0.1); /* graph shadow */
}

Our next order of business is to establish the layout. CSS Grid is a solid approach for this because we can place all three grid items together on the grid in just a couple of lines:

/* Parent container */
.wrapper { display: grid; }

/* Placing elements on the grid */
.wrapper > div {
  grid-area: 1 / 1; /* There's only one grid area to cover */
}

Let’s go ahead and set a size on the grid items. I’m using a fixed length value of 300px, but you can use any value you need and variablize it if you plan on using it in other places. And rather than declaring an explicit height, let’s put the burden of calculating a height on CSS using aspect-ratio to form perfect squares.

/* Placing elements on the grid */
.wrapper div {
  aspect-ratio: 1 / 1;
  grid-area: 1 / 1;
  width: 300px;
}

We can’t see anything just yet. We’ll need to color things in:

/* ----------
Graphs
---------- */
.graphs:nth-of-type(1) > div { background: var(--color1); }
.graphs:nth-of-type(2) > div { background: var(--color2); }
.graphs:nth-of-type(3) > div { background: var(--color3); }

.graphs {
  filter: 
    drop-shadow(1px 1px 10px var(--colorS))
    drop-shadow(-1px -1px 10px var(--colorS))
    drop-shadow(-1px 1px 10px var(--colorS))
    drop-shadow(1px -1px 10px var(--colorS));
}

/* --------------
Grids 
-------------- */
.grids {
  filter: 
    drop-shadow(1px 1px 1px #ddd)
    drop-shadow(-1px -1px 1px #ddd)
    drop-shadow(-1px 1px 1px #ddd)
    drop-shadow(1px -1px 1px #ddd);
  mix-blend-mode: multiply;
}

.grids > div { background: white; }

Oh, wait! We need to set widths on the grids and polygons for them to take shape:

.grids:nth-of-type(2) { width: 66%; }
.grids:nth-of-type(3) { width: 33%; }

/* --------------
Polygons 
-------------- */
.polygons { place-self: center; }
.polygons > div { width: 100%; }

Since we’re already here, I’m going to position the labels a smidge and give them width:

/* --------------
Labels
-------------- */
.labels:first-of-type { inset-block-start: -10%; }

.labels {
  height: 1lh;
  position: relative;
  width: max-content;
}

We still can’t see what’s going on, but we can if we temporarily draw borders around elements.

See the Pen Radar chart layout [forked] by Preethi Sam.

All combined, it doesn’t look all that great so far. Basically, we have a series of overlapping grids followed by perfectly square graphs stacked right on top of one another. The labels are off in the corner as well. We haven’t drawn anything yet, so this doesn’t bother me for now because we have the HTML elements we need, and CSS is technically establishing a layout that should come together as we start plotting points and drawing polygons.

More specifically:

  • The .wrapper elements are displayed as CSS Grid containers.
  • The direct children of the .wrapper elements are divs placed in the exact same grid-area. This is causing them to stack one right on top of the other.
  • The .polygons are centered (place-self: center).
  • The child divs in the .polygons take up the full width (width:100%).
  • Every single div is 300px wide and squared off with a one-to-one aspect-ratio.
  • We’re explicitly declaring a relative position on the .labels. This way, they can be automatically positioned when we start working in JavaScript.

The rest? Simply apply some colors as backgrounds and drop shadows.

Calculating Plot Coordinates

Don’t worry. We are not getting into a deep dive about polygon geometry. Instead, let’s take a quick look at the equations we’re using to calculate the coordinates of each polygon’s vertices. You don’t have to know these equations to use the code we’re going to write, but it never hurts to peek under the hood to see how it comes together.

x1 = x + cosθ1 = cosθ1 if x=0
y1 = y + sinθ1 = sinθ1 if y=0
x2 = x + cosθ2 = cosθ2 if x=0
y2 = y + sinθ2 = sinθ2 if y=0
etc.

x, y = center of the polygon (assigned (0, 0) in our examples)

x1, x2… = x coordinates of each vertex (vertex 1, 2, and so on)
y1, y2… = y coordinates of each vertex
θ1, θ2… = angle each vertex makes to the x-axis

We can assume that 𝜃0 is 90deg (i.e., 𝜋/2) since a vertex can always be placed right above or below the center (i.e., Data A in this example). The rest of the angles can be calculated like this:

n = number of sides of the polygon

𝜃1 = 𝜃0 + 2𝜋/𝑛 = 𝜋/2 + 2𝜋/𝑛
𝜃2 = 𝜃0 + 4𝜋/𝑛 = 𝜋/2 + 4𝜋/𝑛
𝜃3 = 𝜃0 + 6𝜋/𝑛 = 𝜋/2 + 6𝜋/𝑛
𝜃4 = 𝜃0 + 8𝜋/𝑛 = 𝜋/2 + 8𝜋/𝑛
𝜃5 = 𝜃0 + 10𝜋/𝑛 = 𝜋/2 + 10𝜋/𝑛

Armed with this context, we can solve for our x and y values:

x1 = cos(𝜋/2 + 2𝜋/# sides)
y1 = sin(𝜋/2 + 2𝜋/# sides)
x2 = cos(𝜋/2 + 4𝜋/# sides)
y2 = sin(𝜋/2 + 4𝜋/# sides)
etc.

The number of sides depends on the number of plots we need. We said up-front that this is a pentagonal shape, so we’re working with five sides in this particular example.

x1 = cos(𝜋/2 + 2𝜋/5)
y1 = sin(𝜋/2 + 2𝜋/5)
x2 = cos(𝜋/2 + 4𝜋/5)
y2 = sin(𝜋/2 + 4𝜋/5)
etc.
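Before getting into the article’s full script, the vertex math above can be sketched as a small standalone function. This is only an illustration: the helper name polygonVertices is mine, not from the article, and the starting angle is -𝜋/2 (as in the article’s script later on) so that the first vertex lands at the top of the screen, where the y-axis points downward.

```javascript
// A minimal sketch of the vertex equations above: unit-circle coordinates
// for an n-sided polygon, starting at -π/2 so the first vertex sits at the
// top of the screen (the y-axis points down on the web).
// polygonVertices is a hypothetical helper name, not from the article.
function polygonVertices(sides) {
  const points = [];
  for (let i = 0; i < sides; i++) {
    const theta = -Math.PI / 2 + (2 * Math.PI * i) / sides;
    points.push({ x: Math.cos(theta), y: Math.sin(theta) });
  }
  return points;
}

const pentagon = polygonVertices(5);
console.log(pentagon[0]); // first vertex: x ≈ 0, y = -1 (directly above the center)
```

Each subsequent vertex is simply rotated by another 2𝜋/5 around the center, which is exactly what the equations describe.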
Drawing Polygons With JavaScript

Now that the math is accounted for, we have what we need to start working in JavaScript for the sake of plotting the coordinates, connecting them together, and painting in the resulting polygons.

For simplicity’s sake, we will leave the Canvas API out of this and instead use regular HTML elements to draw the chart. You can, however, use the math outlined above and the following logic as the foundation for drawing polygons in whichever language, framework, or API you prefer.

OK, so we have three types of components to work on: grids, graphs, and labels. We start with the grid and work up from there. In each case, I’ll simply drop in the code and explain what’s happening.

Drawing The Grid

// Variables
let sides = 5; // # of data points
let units = 1; // # of graphs + 1
let vertices = (new Array(units)).fill(""); 
let percents = new Array(units);
percents[0] = (new Array(sides)).fill(100); // for the polygon's grid component
let gradient = "conic-gradient(";
let angle = 360/sides;

// Calculate vertices
with(Math) { 
  for (i = 0, n = 2 * PI; i < sides; i++, n += 2 * PI) {
    for (j = 0; j < units; j++) {
      let x = (round(cos(-1 * PI/2 + n/sides) * percents[j][i]) + 100) / 2; 
      let y = (round(sin(-1 * PI/2 + n/sides) * percents[j][i]) + 100) / 2; 
      vertices[j] += `${x}% ${y}${i == sides - 1 ? '%' : '%, '}`;
    }
    gradient += `white ${(angle * (i+1)) - 1}deg,
      #ddd ${(angle * (i+1)) - 1}deg,
      #ddd ${(angle * (i+1)) + 1}deg,
      white ${(angle * (i+1)) + 1}deg,`;
  }
}

// Draw the grids
document.querySelectorAll('.grids > div').forEach((grid, i) => {
  grid.style.clipPath = `polygon(${vertices[0]})`;
});
document.querySelector('.grids:nth-of-type(1) > div').style.background = `${gradient.slice(0, -1)})`;

Check it out! We already have a spider web.

See the Pen Radar chart (Grid) [forked] by Preethi Sam.

Here’s what’s happening in the code:

  1. sides is the number of sides of the chart. Again, we’re working with five sides.
  2. vertices is an array that stores the coordinates of each vertex.
  3. Since we are not constructing any graphs yet — only the grid — the number of units is set to 1, and only one item is added to the percents array at percents[0]. For grid polygons, the data values are 100.
  4. gradient is a string to construct the conic-gradient() that establishes the grid lines.
  5. angle is a calculation of 360deg divided by the total number of sides.

From there, we calculate the vertices:

  1. i is an iterator that cycles through the total number of sides (i.e., 5).
  2. j is an iterator that cycles through the total number of units (i.e., 1).
  3. n is a counter that counts in increments of 2*PI (i.e., 2𝜋, 4𝜋, 6𝜋, and so on).

The x and y values of each vertex are calculated as follows, based on the geometric equations we discussed earlier. Note that we multiply 𝜋 by -1 to steer the rotation.

cos(-1 * PI/2 + n/sides) // cos(𝜋/2 + 2𝜋/sides), cos(𝜋/2 + 4𝜋/sides)...
sin(-1 * PI/2 + n/sides) // sin(𝜋/2 + 2𝜋/sides), sin(𝜋/2 + 4𝜋/sides)...

We convert the x and y values into percentages (since that is how the data points are formatted) and then place them on the chart.

let x = (round(cos(-1 * PI/2 + n/sides) * percents[j][i]) + 100) / 2;
let y = (round(sin(-1 * PI/2 + n/sides) * percents[j][i]) + 100) / 2;
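As a quick sanity check of these formulas, plugging in the first vertex of the grid (sides = 5, a grid value of 100, i = 0, n = 2𝜋) by hand reproduces the first coordinate pair that appears in the console output later in the article:

```javascript
// Sanity check of the formulas above for sides = 5 and a grid value of 100:
// the first vertex (i = 0, n = 2π) should come out at (97.5%, 34.5%).
const sides = 5;
const n = 2 * Math.PI;
const x = (Math.round(Math.cos(-1 * Math.PI / 2 + n / sides) * 100) + 100) / 2;
const y = (Math.round(Math.sin(-1 * Math.PI / 2 + n / sides) * 100) + 100) / 2;
console.log(`${x}% ${y}%`); // "97.5% 34.5%"
```

The `+ 100) / 2` part is what maps the raw (-100…100) range onto the (0%…100%) coordinate space that `polygon()` expects.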

We also construct the conic-gradient(), which is part of the grid. Each color stop corresponds to each vertex’s angle — at each of the angle increments, a grey (#ddd) line is drawn.

gradient += 
  `white ${ (angle * (i+1)) - 1 }deg,
   #ddd ${ (angle * (i+1)) - 1 }deg,
   #ddd ${ (angle * (i+1)) + 1 }deg,
   white ${ (angle * (i+1)) + 1 }deg,`

If we print out the computed variables after the for loop, these will be the results for the grid’s vertices and gradient:

console.log(`polygon(${vertices[0]})`); /* grid’s polygon */
// polygon(97.5% 34.5%, 79.5% 90.5%, 20.5% 90.5%, 2.5% 34.5%, 50% 0%)

console.log(gradient.slice(0, -1)); /* grid’s gradient */
// conic-gradient(white 71deg, #ddd 71deg, #ddd 73deg, white 73deg, white 143deg, #ddd 143deg, #ddd 145deg, white 145deg, white 215deg, #ddd 215deg, #ddd 217deg, white 217deg, white 287deg, #ddd 287deg, #ddd 289deg, white 289deg, white 359deg, #ddd 359deg, #ddd 361deg, white 361deg)

These values are assigned to the grid’s clipPath and background, respectively, and thus the grid appears on the page.

The Graph

// Following the other variable declarations 
// Each graph's data points in the order [B, C, D... A] 
percents[1] = [100, 50, 60, 50, 90]; 
percents[2] = [100, 80, 30, 90, 40];
percents[3] = [100, 10, 60, 60, 80];

// Next to drawing grids
document.querySelectorAll('.graphs > div').forEach((graph, i) => {
  graph.style.clipPath = `polygon(${vertices[i+1]})`;
});

See the Pen Radar chart (Graph) [forked] by Preethi Sam.

Now it looks like we’re getting somewhere! For each graph, we add its set of data points to the percents array after incrementing the value of units to match the number of graphs. And that’s all we need to draw graphs on the chart. Let’s turn our attention to the labels for the moment.
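To see how a data value pulls a vertex toward the center, here is the same formula with a data point of 50 instead of the grid’s 100 — a sketch for illustration (the specific numbers are mine, not from the article’s data sets), using the second vertex (i = 1, n = 4𝜋):

```javascript
// Sketch: the vertex formula from the grid section, scaled by a data value.
// With sides = 5 and a data point of 50 at the second vertex (i = 1, n = 4π),
// the plot lands partway between the center (50%, 50%) and the grid's edge.
const sides = 5;
const percent = 50;
const n = 4 * Math.PI; // second vertex
const x = (Math.round(Math.cos(-1 * Math.PI / 2 + n / sides) * percent) + 100) / 2;
const y = (Math.round(Math.sin(-1 * Math.PI / 2 + n / sides) * percent) + 100) / 2;
console.log(`${x}% ${y}%`); // "64.5% 70%"
```

A value of 100 would land the plot exactly on the grid’s edge; a value of 0 would collapse it to the center at (50%, 50%).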

The Labels

// Positioning labels

// First label is always set in the top middle
let firstLabel = document.querySelector('.labels:first-of-type');
firstLabel.style.insetInlineStart = `calc(50% - ${firstLabel.offsetWidth / 2}px)`;

// Setting labels for the rest of the vertices (data points). 
let v = Array.from(vertices[0].split(' ').splice(0, (2 * sides) - 2), (n)=> parseInt(n)); 

document.querySelectorAll('.labels:not(:first-of-type)').forEach((label, i) => {
  let width = label.offsetWidth / 2; 
  let height = label.offsetHeight;
  label.style.insetInlineStart = `calc(${v[i*2]}% + ${v[i*2] < 50 ? -3 * width : v[i*2] == 50 ? -width : width}px)`;
  label.style.insetBlockStart = `calc(${v[(i*2) + 1]}% - ${v[(i*2) + 1] == 100 ? -height : height / 2}px)`;
});

The positioning of the labels is determined by three things:

  1. The coordinates of the vertices (i.e., data points) they should be next to,
  2. The width and height of their text, and
  3. Any blank space needed around the labels so they don’t overlap the chart.

All the labels are positioned relative in CSS. By adding the inset-inline-start and inset-block-start values in the script, we can reposition the labels using the values as coordinates. The first label is always set to the top-middle position. The coordinates for the rest of the labels are the same as their respective vertices, plus an offset. The offset is determined like this:

  1. x-axis/horizontal
    If the label is at the left (i.e., x is less than 50%), then it’s moved towards the left based on its width. Otherwise, it’s moved towards the right side. As such, the right or left edges of the labels, depending on which side of the chart they are on, are uniformly aligned to their vertices.
  2. y-axis/vertical
    The height of each label is fixed. There’s not much offset to add except maybe moving them down half their height. Any label at the bottom (i.e., when y is 100%), however, could use additional space above it for breathing room.
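The x-axis rule above can be sketched as a small helper. This is an illustration only: horizontalOffset is a hypothetical name, and `width` here is half the label’s offsetWidth, matching how the script above computes it.

```javascript
// Sketch of the horizontal offset rule; `width` is half the label's
// offsetWidth, as in the script above. horizontalOffset is a hypothetical name.
function horizontalOffset(xPercent, width) {
  if (xPercent < 50) return -3 * width; // left side: pull the label leftward
  if (xPercent === 50) return -width;   // bottom middle: center the label
  return width;                         // right side: push the label rightward
}

console.log(horizontalOffset(2.5, 20));  // -60 (a left-side vertex)
console.log(horizontalOffset(97.5, 20)); // 20 (a right-side vertex)
```

The net effect is that labels on either side of the chart align their inner edges to their vertices instead of overlapping the polygon.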

And guess what…

We’re Done!

See the Pen Radar chart (Pentagon) [forked] by Preethi Sam.

Not too shabby, right? The most complicated part, I think, is the math. But since we have that figured out, we can practically plug it into any other situation where a radar chart is needed. Need a four-point chart instead? Update the number of vertices in the script and account for fewer elements in the markup and styles.

In fact, here are two more examples showing different configurations. In each case, I’m merely increasing or decreasing the number of vertices, which the script uses to produce different sets of coordinates that help position points along the grid.

Need just three sides? All that means is two fewer coordinate sets:

See the Pen Radar chart (Triangle) [forked] by Preethi Sam.

Need seven sides? We’ll produce more coordinate sets instead:

See the Pen Radar chart (Heptagon) [forked] by Preethi Sam.

]]>
hello@smashingmagazine.com (Preethi Sam)
<![CDATA[Frequently Heard In My Beginning Front-End Web Development Class]]> https://smashingmagazine.com/2024/02/frequently-heard-beginning-front-end-web-development-class/ https://smashingmagazine.com/2024/02/frequently-heard-beginning-front-end-web-development-class/ Thu, 08 Feb 2024 13:00:00 GMT I felt uninspired for a spell in 2019 and decided to enroll in a beginning-level community college course on web development as a way to “spice” things up, sort of like going backwards in order to move forwards. I had no interest in being an old dog learning new tricks; what I wanted was to look at front-end development through the eyes of a beginner in 2019 after having been a beginner in 2003.

Fast-forward five years, and I’m now teaching that class for the same college, as well as three others. What I gained by reprising my student status is an obsession with “a-ha!” moments. It’s the look in a student’s eyes when something “clicks” and new knowledge is developed. With the barrier to learning web development seemingly getting higher all the time, here I am, making merry with the basics. (That linked post to Rachel’s blog is what spurred me to go back to school.)

With several years of teaching under my belt, I have plenty of opinions about the learning landscape for web development. But what I am more interested in continues to be vicarious living through the eyes of my entry-level students and the consistent sparks of knowledge they make.

Questions are often the precursor to an “a-ha!” moment. And my students ask some pretty darn interesting questions every term, without fail, questions that have forced me to reconsider not only how I approach curriculum and instruction but also how other people think about The Web™ as a whole.

I’ve made a practice of collecting handfuls of student questions and comments. That way, I can reflect on how I might respond or answer them for future students and reference them as I write and update my lesson plans. I thought I’d share a few of them because, I hope, it will give you an idea of what those getting into the field are curious about. I think you’ll find that as many of us debate and decry the value of JavaScript frameworks, Core Web Vitals, AI, and whether Typescript is a necessary evil, the people cracking into web development are asking the most interesting questions in the field and are making way different connections than those of us who have spent forever on the front end.

These are pulled straight from students in the current Spring term. We’re only three weeks into the semester, but check out what sorts of things are already swirling around their minds as we discuss semantics, accessibility, and writing modes.

“I really never thought of this; however, code could be inclusive, and how coding could express empathy. While reading this portion of the context, I was thinking about my Kindle and how the Kindle can have audio, change my font style, larger/smaller font, and lighting. All of this helps me to read and navigate my books better depending on my surroundings and how much accessibility I will need. For example, when I am driving, I love my audiobooks, and at night, I use my dim setting and change font size because it’s the end of the day, and my eyes don’t want to do too much work reading smaller text. It’s really fascinating that coding can do all of this.”
“If we are confused about our coding and it doesn’t make sense to us, it will definitely confuse the readers, which is the opposite of our end goal, accessibility. There are also times when we might want to use <div> where we could use <article> or <nav> or any of the other important elements. It’s essential to fully understand the elements and their uses in order to write the cleanest code.”
“Tackling CSS logical properties this week felt like a juggling act, trying to keep all those new concepts in the air. Swapping left and right for inline-start and inline-end is a bit confusing, but it’s cool to see how it makes websites more welcoming for different languages.”
“What are the legal implications of website liability? I couldn’t imagine the size of a class action lawsuit that Facebook would get smacked with if a rogue developer decided to pin a gif of a strobe light to the top of the world’s newsfeeds. Are websites subject to the same legislation that requires buildings to have wheelchair ramps?”
“Sometimes, I wonder how to make all this new stuff work on old browsers that might not get what I’m trying to do. I also get stuck when my page looks great in one language but breaks in another. What’s the best way to check my work in different languages or writing modes?”
“One of the big things that really got me stoked was learning how to make content in Flexbox the same size using flex or flex-basis. This was a really big thing for me last semester when I was working on my final project. I spent a lot of time trying to figure out how to make the content in Webflow equal in size.”
“Hearing the terms “Writing Modes” and “Logical Properties” in CSS was a bit of a concern at the beginning of this week. A lot of CSS I remember was beginning to come back, but these two were new. After going over the course modules, my concern lifted a lot, mainly because Writing Modes were the layout of text in a certain element. As simple as I thought it was, it was also very important considering how writing modes change in different countries. Learning how these writing modes change the flow of text showed how much more inclusion you could bring to a website, allowing for different languages to be filtered in.”
“Although in the previous course, we learned how flexbox and grid can be used to style interesting content on sites, we didn’t study how they were made with CSS. It was surprisingly simple to grasp the basic concepts of setting up a flexbox or grid and how their children can be positioned on a main axis and cross axis. I especially enjoyed setting up grids, as both methods are intuitive, and the concept of selecting the grid lines that an element sits in reminds me of how some programming languages implement arrays and ranges. Python, for instance, allows the user to select the last element of an array using -1, just as grid-column: 1 / -1 can specify that an element spans until the end of a row.”
“Logical Properties were intimidating at first, but it was just changing the code to make it make sense in a way. After learning CSS — a bit a while ago — Logical Properties seemed more modern, and I think I adapted to it quickly.”
“But as a whole, I could see the building of websites to be a very easy thing to automate, especially in this day and age. Perhaps that is why site builders do not get super specific with their semantics — I usually only find <html>, <body>, and <head>, while the rest is filled with <div>. Especially when it comes to companies that push a lot of articles or pages out onto the internet, I can see how they would not care much for being all-inclusive, as it matters more that they get the content out quickly.”
“I did not think I would enjoy coding, but so far, I like this class, and I’m learning so much. I liked getting into CSS a little and making things more customizable. I found it interesting that two elements make your content look the same but have different meanings.”

I want to end things with a few choice quotes from students who completed my course in the last term. I share them not as ego boosters but as a reminder that simplicity is still alive, well, and good on the web. While many new developers feel pressured to earn their “full stack” merit badge, the best way to learn the web — and make people excited about it — is still the simple “a-ha!” moment that happens when someone combines HTML with CSS for the first time in a static file.

“I can confidently say that among all the courses I’ve taken, this is the first one where I thoroughly read all the content and watched all the videos in detail because it is so well-structured. Despite the complexity of the subject, you made this course seem surprisingly easy to grasp.”
“Man, I’ve learned so much in this class this semester, and it’s finally over. This final project has given me more confidence and ability to troubleshoot and achieve my vision.”
“Even though I did not pass, I still truly did enjoy your class. It made me feel smart because coding had felt like an impossible task before.”
“I especially appreciate Geoff’s enthusiasm for multiple reasons. I am hoping to change careers, and the classes are helping me get closer to that reality.”

These are new people entering the field for the first time who are armed with a solid understanding of the basics and a level of curiosity and excitement that easily could clear the height of Mount Elbert.

Isn’t that what we want? What would the web look like if we treat the next wave of web developers like first-class citizens by lowering the barriers to entry and rolling out the red carpet for them to crack into a career in front-end? The web is still a big place, and there is room for everyone to find their own groove. Some things tend to flourish when we democratize them, and many of us experienced that first-hand when we first sat down and wrote HTML for the very first time without the benefit of organized courses, bootcamps, YouTube channels, or frameworks to lean on. The same magic that sparked us is still there to spark others after all this time, even if we fail to see it.

]]>
hello@smashingmagazine.com (Geoff Graham)
<![CDATA[Web Development Is Getting Too Complex, And It May Be Our Fault]]> https://smashingmagazine.com/2024/02/web-development-getting-too-complex/ https://smashingmagazine.com/2024/02/web-development-getting-too-complex/ Wed, 07 Feb 2024 13:00:00 GMT Front-end development seemed simpler in the early 2000s, didn’t it? The standard website consisted mostly of static pages made of HTML and CSS seasoned with a pinch of JavaScript and jQuery. I mean, who doesn’t miss the cross-browser compatibility days, right?

Fast forward to today, and it looks like a parallel universe is taking place with an overwhelming number of choices. Which framework should you use for a new project? Perhaps more established ones like React, Angular, Vue, Svelte, or maybe the hot new one that came out last month? Each framework comes with its unique ecosystem. You also need to decide whether to use TypeScript over vanilla JavaScript and choose how to approach server-side rendering (or static site generation) with meta-frameworks like Next, Nuxt, or Gatsby. And we can’t forget about unit and end-to-end testing if you want a bug-free web app. And we’ve barely scratched the surface of the front-end ecosystem!

But has it really gotten more complex to build websites? A lot of the frameworks and tooling we reach for today were originally crafted for massive projects. As a newcomer, it can be frightening to have so many to consider, almost creating a fear of missing out that we see exploited to sell courses and tutorials on the new hot framework that you “cannot work without.”

All this gives the impression that web development has gotten perhaps too complex. But maybe that is just an exaggeration? In this article, I want to explore those claims and find out if web development really is that complex and, most importantly, how we can prevent it from getting even more difficult than we already perceive it to be.

How It Was Before

As someone who got into web development after 2010, I can’t testify to my own experience about how web development was from the late 1990s through the 2000s. However, even fifteen years ago, learning front-end development was infinitely simpler, at least to me. You could get a website started with static HTML pages, minimal CSS for styling, and a sprinkle of JavaScript (and perhaps a touch of jQuery) to add interactive features, from toggled sidebars to image carousels and other patterns. Not much else was expected from your average developer beyond that — everything else was considered “going the extra mile.” Of course, the awesome native CSS and JavaScript features we have today weren’t around back then, but they were also unnecessary for what was considered best practice in past years.

Large and dynamic web apps certainly existed back then — YouTube and Facebook, to name a couple — but they were developed by massive companies. No one was expected to re-create that sort of project on their own or even a small team. That would’ve been the exception rather than the norm.

I remember that back then, I tended to worry more about things like SEO and page optimization than how my IDE was configured, but only to the point of adding meta tags and keywords, because best practices didn’t include minifying all your assets, tree-shaking your code, caching your site on edge CDNs, or rendering your content on the server (a problem created by modern frameworks along with hydration). Other factors like accessibility, user experience, and responsive layouts were also largely overlooked in comparison to today’s standards. Now, they are deeply analyzed and used to boost Lighthouse scores and impress search engine algorithms.

The web and everything around it changed as more capabilities were added and more and more people grew to depend on it. We have created new solutions, new tools, new workflows, new features, and whatever else new that is needed to cater to a bigger web with even bigger needs.

The web has always had problems worth fixing: I absolutely don’t miss tables and float layouts, along with messy DOM manipulation. This post isn’t meant to throw shade on new advances while waxing nostalgic about the good days of the “old wild web.” At the same time, though, yesterday’s problems seem infinitely simpler than those we face today.

JavaScript Frameworks

JavaScript frameworks, like Angular and React, were created by Google and Facebook, respectively, to be used in their own projects and satisfy the needs that only huge web-based companies like them have. Therein lies the main problem with web complexity: JavaScript frameworks were originally created to sustain giant projects rather than smaller ones. Many developers vastly underestimate the amount of time it takes to build a codebase that is reliable and maintainable with a JavaScript framework. However, the alternative of using vanilla JavaScript was worse, and jQuery fell short for the task. Vanilla JavaScript was also unable to evolve quickly enough to match our development needs, which changed from simple informative websites to dynamic apps. So, many of us quickly adopted frameworks to avoid wrangling directly with JavaScript and its messy DOM manipulation.

Back-end development is a completely different topic, subject to its own complexities. I only want to focus on front-end development because that is the discipline that has perhaps overstepped its boundaries the most by bleeding into traditional back-end concerns.

Stacks Getting Bigger

It was only logical for JavaScript frameworks to grow in size over time. The web is a big place, and no one framework can cover everything. But they try, and the complexity, in turn, increases. A framework’s size seems to have a one-to-one correlation with its complexity.

But the core framework is just one piece of a web app. Several other technologies make up what’s known as a tech “stack,” and with the web gaining more users and frameworks catering to their needs, tech stacks are getting bigger and bigger. You may have seen popular stacks such as MEAN (MongoDB, Express, Angular, and Node) or its React (MERN) and Vue (MEVN) variants. These stacks are marketed as mature, battle-tested foundations suitable for any front-end project. That means the advertised size of a core framework is grossly underestimated because these stacks rely on other micro-frameworks to ensure highly reliable architectures, as you can see on stackshare.io. Besides, there isn’t a one-size-fits-all stack; the best tool has always depended — and will continue to depend — on the needs and goals of your particular project.

This means that each new project likely requires a unique architecture to fulfill its requirements. Giant tech companies need colossal architectures across all their projects, and their stacks are engineered accordingly to ensure scalability and maintainability. They also have massive customer bases, so maintaining a large codebase is easier with more revenue, more engineers, and a clearer picture of the problem. To minimize waste, the tech stacks of smaller companies and projects can and should be minimized not only to match the scale of their needs but also to fit the abilities of the developers on the team.

The idea that web development is getting too complex comes from buying into the belief that we all have the same needs and resources as giant enterprises.

Trying to imitate their mega stacks is pointless. Some might argue that it’s a sacrifice we have to make for future scalability and maintenance, but we should focus first on building great sites for the user without worrying about features users might need in the future. If what we are building is worth pursuing, it will reach the point where we need those giant architectures in good time. Cross that bridge when we get there. Otherwise, it’s not unlike wearing Shaquille O’Neal-sized sneakers in hopes of growing into them. They might not even last until then if it happens at all!

We must remember that the end-user experience is the focus at the end of the day, and users neither care about nor know what stack we use in our apps. What they care about is a good-looking, useful website where they can accomplish what they came for, not the technology we use to achieve it. This is how I’ve come to believe that web development is not getting more complex. It’s developers like us who are perpetuating it by buying into solutions for problems that do not need to be solved at a certain scale.

Let me be really clear: I am not saying that today’s web development is all bad. Indeed, we’ve gained a lot of great features, many of them thanks to JavaScript frameworks that pushed for their adoption. jQuery had that same influence on JavaScript for many, many years.

We can still create minimum viable products today with minimal resources. No, those might not make people smash the Like button on your social posts, but they meet the requirements, nothing more and nothing less. We want bigger! Faster! Cheaper! But we can’t have all three.

If anything, front-end development has gotten way easier thanks to modern features that solve age-old development issues, like the way CSS Flexbox and Grid have trivialized layouts that used to require complex hacks involving floats and tables. It’s the same deal with JavaScript gaining new ways to build interactions that used to take clever workarounds or obtuse code, such as having the Intersection Observer API to trivialize things like lazy loading (although HTML has gained its own features in that area, too).
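To illustrate how approachable that has become, here is a minimal sketch of Intersection Observer-based lazy loading; the function name and the `data-src` convention are illustrative choices, not a standard:

```javascript
// A sketch of lazy loading with the Intersection Observer API.
// Assumes each <img> carries its real URL in a data-src attribute.
function lazyLoadImages(images) {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target;
      img.src = img.dataset.src; // swap in the real source on first visibility
      obs.unobserve(img); // each image only needs to load once
    }
  });
  images.forEach((img) => observer.observe(img));
}

// In the browser: lazyLoadImages(document.querySelectorAll("img[data-src]"));
```

For images specifically, the native `loading="lazy"` attribute now covers the common case with no JavaScript at all.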

We live in this tension between the ease of new platform features and the complexity of our stacks.

Do We Need A JavaScript Framework For Everything?

Each project, regardless of its simplicity, desperately needs a JavaScript framework. A project without a complex framework is like serving caviar on a paper plate.

At least, that’s what everyone seems to think. But is that actually true? I’d argue on the contrary. JavaScript frameworks are best used on bigger applications. If you’re working on a smaller project, a component-based framework will only complicate matters, making you split your website into a component hierarchy that amounts to overkill for small projects.

The idea of needing a framework for everything has been massively oversold. Maybe not directly, but you unconsciously get that feeling whenever a framework’s name pops up, as Edge engineer Alex Russell eloquently expresses in his article, “The Market For Lemons”:

“These technologies were initially pitched on the back of “better user experiences” but have utterly failed to deliver on that promise outside of the high-management-maturity organisations in which they were born. Transplanted into the wider web, these new stacks have proven to be expensive duds.”

— Alex Russell

Remember, the purpose of a framework is to simplify your life and save time. If the project you’re working on is smaller, the time you supposedly save is likely overshadowed by the time you spend either setting up the framework or making it work with the rest of the project. A framework can help make bigger web apps more interactive and dynamic, but there are times when a framework is a heavy-handed solution that actually breeds inefficient workflows and introduces technical debt.

Step back and think about this: Are HTML, CSS, and a touch of JavaScript enough to build your website or web application? If so, then stick with those. What I am afraid of is adding complexity for complexity’s sake and inadvertently raising the barrier to entry for those coming into web development. We can still accomplish so much with HTML and CSS alone, thanks again to many advances in the last decade. But we give the impression that they are unsuitable for today’s web consumption and need to be enhanced.

Knowing Everything And Nothing At The Same Time

The perceived standard that teams must adopt framework-centered architectures puts a burden not only on the project itself but on a developer’s well-being, too. As mentioned earlier, most teams are unable to afford those architectures and only have a few developers to maintain them. If we undermine what can be achieved with HTML and CSS alone and set the expectation that any project — regardless of size — needs a bleeding-edge stack, then the weight of meeting that expectation falls on the developer’s shoulders, who carries the responsibility of being proficient in all areas, from the server and database to the front end, design, accessibility, performance, and testing. And it doesn’t stop there. It’s what has been driving “The Great Divide” in front-end development, which Chris Coyier explains like this:

“The divide is between people who self-identify as a (or have the job title of) front-end developer yet have divergent skill sets. On one side, an army of developers whose interests, responsibilities, and skillsets are heavily revolved around JavaScript. On the other, an army of developers whose interests, responsibilities, and skillsets are focused on other areas of the front end, like HTML, CSS, design, interaction, patterns, accessibility, and so on.”

— Chris Coyier

Under these expectations, developers who focus more on HTML, CSS, design, and accessibility rather than the latest technology will feel less valued in an industry that appears to praise those who are concerned with the stack. What exactly are we saying when we start dividing responsibilities in terms of “full-stack development” or absurd terms like “10x development”? A while back, Brad Frost began distinguishing these divisions as “front-of-the-front-end” and “back-of-the-front-end”.

Mandy Michael explains what impact the chase for “full-stack” has had on developers trying to keep up:

“The worst part about pushing the “know everything” mentality is that we end up creating an industry full of professionals suffering from burnout and mental illness. We have people speaking at conferences about well-being, imposter syndrome, and full-stack anxiety, yet despite that, we perpetuate this idea that people have to know everything and be amazing at it.”

— Mandy Michael

This isn’t the only symptom of adopting heavy-handed solutions for what “vanilla” HTML, CSS, and JavaScript already handle nicely. As the expectations for what we can do as front-end developers grow, the learning curve of front-end development grows as well. Again, we can’t learn and know everything in this vast discipline. But we tell ourselves we have to, and thanks to this mentality, it’s unfortunately common to witness developers who may be extremely proficient with a particular framework but actually know and understand little of the web platform itself, like HTML semantics and structure.

The fact that many budding developers tend to jump straight into frameworks at the expense of understanding the basics of HTML and CSS isn’t a new worry, as Rachel Andrew discussed back in 2019:

“That’s the real entry point here, and yes, in 2019, they are going to have to move on quickly to the tools and techniques that will make them employable, if that is their aim. However, those tools output HTML and CSS in the end. It is the bedrock of everything that we do, which makes the devaluing of those with real deep skills in those areas so much more baffling.”

— Rachel Andrew

And I want to clarify yet again that modern JavaScript frameworks and libraries aren’t inherently bad; they just aren’t designed to replace the web platform and its standards. But we keep pushing them like we want them to!

The Consequences Of Vendor Lock-In

“Vendor lock-in” happens when we depend too deeply on proprietary products and services to the extent that switching to other products and services becomes a nearly impossible task. This often occurs when cloud services from a particular company are deeply integrated into a project. It’s an issue, especially in cloud computing, since moving databases once they are set up is expensive and lengthy.

Vendor lock-in in web development has traditionally been restricted to the back end, like with cloud services such as AWS or Firebase; the front-end framework, meanwhile, was a completely separate concern. That said, I have noticed a recent trend where vendor lock-in is reaching into meta-frameworks, too. With the companies behind certain meta-frameworks offering hosting services for their own products, swapping hosts is increasingly harder to do (whether the lock-in is designed intentionally or not). Of course, companies and developers will be more likely to choose the hosting service of the company that made a particular framework used on their projects — they’re the experts! — but that only increases the project’s dependency on those vendors and their services.

A clear example is the relationship between Next and Vercel, the parent cloud service for Next. With the launch of Next 13, it has become increasingly harder to set up a Next project outside of Vercel, leading to projects like Open Next, which says right on its website that “[w]hile Vercel is great, it’s not a good option if all your infrastructure is on AWS. Hosting it in your AWS account makes it easy to integrate with your backend [sic]. And it’s a lot cheaper than Vercel.” Fortunately, the developers’ concerns have been heard, and Next 14 brings clarity on how to self-host Next on a Node server.
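As a small illustration of what that self-hosting path looks like, Next’s documented standalone output mode bundles a minimal Node server at build time; treat this as a sketch of the general approach rather than a complete deployment guide:

```javascript
// next.config.js: opt into a self-contained build for self-hosting.
// With `output: "standalone"`, `next build` emits a minimal Node server
// (plus only the node_modules files it needs) under .next/standalone,
// which can then be run on any host with `node .next/standalone/server.js`.
module.exports = {
  output: "standalone",
};
```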

Another example is Gatsby and Gatsby Cloud. Gatsby has always offered helpful guides and alternative hosting recommendations, but since the launch of Gatsby Cloud in 2019, the main framework has been optimized so that using Gatsby and Gatsby Cloud together requires no additional hosting configurations. That’s fantastic if you adopt both, but it’s not so great if all you need is one or the other because integrating the framework with other hosts — and vice versa — is simply harder. It’s as if you are penalized for exercising choice.

And let’s not forget that no team expected Netlify to acquire Gatsby Cloud in February 2023. This is a prime case where the vendor lock-in problem hits everybody because converting from one site to another comes at a cost. Some teams were charged 120% more after converting from Gatsby Cloud to Netlify — even with the same plan they had with Gatsby Cloud!

What’s the solution? The common answer I hear is to stop using paid cloud services in favor of open-sourced alternatives. While that’s great and indeed a viable option for some projects, it fails to consider that an open-source project may not meet the requirements needed for a given app.

And even then, open-source software depends on the community of developers that maintain and update the codebase with little to no remuneration in exchange. Further, open source is equally prone to locking you into certain solutions that are designed to solve a deficiency with the software.

There are frameworks and libraries, of course, that are in no danger of being abandoned. React is a great example because it has an actively engaged community behind it. But you can’t have the same assurance with each new dependency you add to a project. We can’t simply keep installing more packages and components each time we spot a weak spot in the dependency chain, especially when a project is perfectly suited for a less complex architecture that properly leverages the web platform.

Choosing technology for your stack is an exercise of picking your own poison. Either choose a paid service and be subject to vendor lock-in in the future, or choose an open-source one and pray that the community continues to maintain it.

Those are virtually the only two choices. Many of the teams I know or have worked on depend on third-party services because they cannot afford to develop them on their own; that’s a luxury that only massive companies can afford. It’s a problem we have to undergo when starting a new project, but one we can minimize by reducing the number of dependencies and choosing wisely when we have to.

Each Solution Introduces A New Problem

Why exactly have modern development stacks gotten so large and complex? We can point a finger at the “Development Paradox.” With each new framework or library, a new problem crops up, and time-starved developers spend months developing a new tool to solve that problem. And when there isn’t a problem, don’t worry — we will create one eventually. This is a feedback loop that creates amazing solutions and technologies but can lead to over-engineered websites if we don’t rein it in.

This reminds me of the famous quote:

“The plain fact is that if you don’t have a problem, you create one. If you don’t have a problem, you don’t feel that you are living.”

— U.G. Krishnamurti

Let’s look specifically at React. It was originally created by Facebook for Facebook to develop more dynamic features for users while improving Facebook’s developer experience.

Since React was open-sourced in 2013 (and nearly re-licensed in 2017, if it weren’t for the WordPress community), hundreds of new utilities have been created to address various React-specific problems. How do you start a React project? There’s Create React App and Vite. Do you need to enhance your state management? There is Redux, among other options. Need help creating forms? There is React Hook Form. And perhaps the most important question: Do you need server-side rendering? There’s Next, Remix, or Gatsby for that. Each solution comes with its own caveats, and developers will create their own solutions for them.

It may be unfair to pick on React since it considers itself a library, not a framework. It’s inevitably prone to be extended by the community. Meanwhile, Angular and Vue are frameworks with their own community ecosystems. And this is the tip of the iceberg since there are many JavaScript frameworks in the wild, each with its own distinct ideology and dependencies.

Again, I don’t want you to get the wrong idea. I love that new technologies emerge and find it liberating to have so many options. But when building something as straightforward as a webpage or small website — which some have started referring to as “multi-page applications” — we have to draw a line that defines how many new technologies we use and how reliable they are. We’re quite literally mashing together third-party code written by various third-party developers. What could go wrong? Please don’t answer that.

Remember that our users don’t care what’s in our stacks. They only see the final product, so we can save ourselves from working on unnecessary architectures that aren’t appreciated outside of development circles. It may seem counterintuitive in the face of advancing technology, but knowing that the user doesn’t care about what goes on behind the scenes and only sees the final product will significantly enhance our developer experience and free us from locked dependencies. Why fix something that isn’t broken?

How Can We Simplify Our Codebases?

We’ve covered several reasons why web development appears to be more complex today than in years past, but blaming developers for releasing new utilities isn’t an accurate portrayal of the real problem. After all, when developing a site, it’s not like we are forced to use each new technology that enters the market. In fact, many of us are often unaware of a particular library and only learn about it when developing a new feature. For example, if we want to add toast notifications to our web app, we will look for a library like react-toastify rather than some other way of building them because it “goes with” that specific library. It’s worth asking whether the app needs toast notifications at all if they introduce new dependencies.
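If the answer is yes, it is still worth knowing how little code the simple case takes. A dependency-free toast can be sketched in a few lines of vanilla JavaScript; the function name, class name, and default duration here are illustrative assumptions, not a drop-in replacement for a library:

```javascript
// A minimal toast notification without any library.
// Styling is left to a `.toast` class defined in your own CSS.
function showToast(message, { duration = 3000 } = {}) {
  const toast = document.createElement("div");
  toast.className = "toast";
  toast.textContent = message;
  document.body.appendChild(toast);
  setTimeout(() => toast.remove(), duration); // dismiss automatically
  return toast;
}

// Usage: showToast("Review saved!");
```

That is not a knock on react-toastify, which adds stacking, animations, and other conveniences; it is a reminder that the platform alone covers the simple case.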

Imagine you are developing an app that allows users to discover, review, and rate restaurants in their area. The app needs, at a bare minimum, information about each restaurant, a search tool to query them, and an account registration flow with authentication to securely access the account. It’s easy to make assumptions about what a future user might need in addition to these critical features. In many cases, a project ends up delayed because we add unnecessary features like SSR, notifications, offline mode, and fancy animations — sometimes before the app has even converted its first registered user!

I believe we can boil down the complexity problem to personal wishes and perceived needs rather than properly scoping a project based on user needs and experiences.

That level of scope creep can easily turn into an over-engineered product that will likely never see the light of launching.

What can we do to simplify our own projects? The following advice is relevant when you have control over your project, either because it’s a personal one, it’s a smaller one for a smaller team, or you have control over the decisions in whatever size organization you happen to be in.

The hardest and most important step is developing a sense for when your codebase is getting unnecessarily complicated. I deem it the hardest step because there is no certainty about what the requirements are or what the user needs; we can only make assumptions. Some are obvious, like assuming the user will need a way to log into the app. Others might be unclear, like whether the app should have private messaging between users. Still others are far-fetched, like believing users need extremely low latency on an e-commerce page. Other features are in the “nice to have” territory.

That is regarding the user experience, but the same questions emerge on the development side:

  • Should we be using a CSS preprocessor or a CSS framework, or can we achieve it using only CSS modules?
  • Is vanilla JavaScript enough, or are we going to add TypeScript?
  • Does the app need SSR, SSG, or a hybrid of the two?
  • Should we implement Redis on the back end for faster database queries, or is that too much scope for the work?
  • Should we be implementing end-to-end testing or unit tests?

These are valid questions that should be considered when developing a site, but they can distract us from our main focus: getting things done.

“Done is better than perfect.”

— Sheryl Sandberg

And, hey, even the largest and most sophisticated apps began as minimal offerings that iterated along the way.

We also ought to be asking ourselves what would happen if a particular feature or dependency isn’t added to the project. If the answer is “nothing,” then we should be shifting our attention to something else.

Another question worth asking: “Why are we choosing to add [X]?” Is it because that’s what is popular at the moment, or because it solves a problem affecting a core feature? Another aspect to take into consideration is how familiar we are with certain technologies; give preference to those we know and can start using right away rather than having to stop and learn the ins and outs of a new framework.

Choose the right tool for the job, which is going to be the one that meets the requirements and fits your mental model. Focus less on a library’s popularity and scalability and more on getting your app to the point where it needs to scale in the first place.

Conclusion

It’s incredibly difficult to not over-engineer web apps given current one-size-fits-all and fear-of-missing-out mentalities. But we can be more conscious of our project goals and exercise vigilance in guarding our work against scope creep. The same can be applied to the stack we use, making choices based on what is really needed rather than focusing purely on what everyone else is using for their particular work.

After reading the word “framework” exactly 48 times in this article, can we now say the web is getting too complex? It has been complex by nature since its origins, but complexity doesn’t translate to “over-engineered” web apps. The web isn’t intrinsically over-engineered, and we only have ourselves to blame for over-engineering our projects with overwrought solutions for perceived needs.

]]>
hello@smashingmagazine.com (Juan Diego Rodríguez)
<![CDATA[A Guide To Designing For Older Adults]]> https://smashingmagazine.com/2024/02/guide-designing-older-adults/ https://smashingmagazine.com/2024/02/guide-designing-older-adults/ Tue, 06 Feb 2024 08:00:00 GMT Today, one billion people are 60 years or older. That’s 12% of the entire world population, and the age group is growing faster than any other group. Yet, online, the needs of older adults are often overlooked or omitted. So what do we need to consider to make our designs more inclusive for older adults? Well, let’s take a closer look.

This article is part of our ongoing series on design patterns. It’s also a part of the video library on Smart Interface Design Patterns 🍣 and is available in the live UX training as well.

Make Users Feel Independent And Competent

When designing for older adults, we shouldn’t make our design decisions based on stereotypes or assumptions that are often not true at all. Don’t assume that older adults struggle to use digital products. Most older adults are healthy, active, and have a solid income.

They might use the web differently than younger users, but that doesn’t mean we need to design a “barebones” version for them. What we need is a reliable, inclusive digital experience that helps everyone feel independent and competent.

Good accessibility is good for everyone. To make it happen, we need to bring older adults into our design process and find out what their needs are. This doesn’t only benefit the older audience but improves the overall UX — for everyone.

One Task At A Time and Error Messages

When designing for older users, keep in mind that there are significant differences in age groups 60–65, 65–70, 70–75, and so on, so explore design decisions for each group individually.

Older adults often read and analyze every word (the so-called Stroop effect), so give them enough time to complete a task, as well as control over the process. Avoid disappearing messages so that users can close them themselves when they are ready, and present only one question at a time in a form.

Older adults also often struggle with precise movements, so avoid long, fine drag gestures and interactions that demand precision. If a user performs an action they didn’t mean to and runs into an error, be sure your error messages are helpful and forgiving, as older adults often view error messages as a personal failure.

As Peter Sylwester has suggested, sensory reaction times peak at about the age of 24 and then degrade slowly as we age. Most humans maintain fine motor skills and decent reaction times well into old age. Therefore, error messages and small updates and prompts should almost always be a consideration. One good way to facilitate reaction time is to keep errors and prompts close to the center of attention.

As always, when it comes to accessibility, watch out for contrast. Particularly, shades of blue/purple and yellow/green are often difficult to distinguish. When using icons, it is also a good idea to add descriptive labels to ensure everyone can make sense of them, no matter their vision.

Guidelines For Designing For Older Adults
  • Avoid disappearing messages: let users close them.
  • Avoid long, fine drag gestures and precision.
  • Avoid floating labels and use static field labels instead.
  • Don’t rely on icons alone: add descriptive labels.
  • Ask for explicit confirmation for destructive actions.
  • Add a “Back” link in addition to the browser’s “Back” button.
  • In forms, present one question or one topic per screen.
  • Use sufficient contrast (particularly shades of blue/purple and yellow/green can be hard to distinguish).
  • Make error messages helpful and forgiving.
Wrapping Up

We should be careful not to make our design decisions based on assumptions that are often not true at all. We don’t need a “barebones” version for older users. We need a reliable, inclusive product that helps people of all groups feel independent and competent.

Bring older adults into your design process to find out what their specific needs are. It’s not just better for that specific target audience — good accessibility is better for everyone. And huge kudos to the wonderful people contributing to a topic that is often forgotten and overlooked.

Useful Resources

Meet Smart Interface Design Patterns

If you are interested in similar insights around UX, take a look at Smart Interface Design Patterns, our 10h-video course with 100s of practical examples from real-life projects — with a live UX training starting March 8. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview.

Meet Smart Interface Design Patterns, our video course on interface design & UX.

100 design patterns & real-life examples.
10h-video course + live UX training. Free preview.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[When Words Cannot Describe: Designing For AI Beyond Conversational Interfaces]]> https://smashingmagazine.com/2024/02/designing-ai-beyond-conversational-interfaces/ https://smashingmagazine.com/2024/02/designing-ai-beyond-conversational-interfaces/ Fri, 02 Feb 2024 13:00:00 GMT Few technological innovations can completely change the way we interact with computers. Lucky for us, it seems we’ve won front-row seats to the unfolding of the next paradigm shift.

These shifts tend to unlock a new abstraction layer to hide the working details of a subsystem. Generalizing details allows our complex systems to appear simpler & more intuitive. This streamlines coding programs for computers as well as designing the interfaces to interact with them.

The Command Line Interface, for instance, created an abstraction layer to enable interaction through a stored program. This hid the subsystem details once exposed in earlier computers that were only programmable by inputting 1s & 0s through switches.

Graphical User Interfaces (GUI) further abstracted this notion by allowing us to manipulate computers through visual metaphors. These abstractions made computers accessible to a mainstream of non-technical users.

Despite these advances, we still haven’t found a perfectly intuitive interface — the troves of support articles across the web make that evident. Yet recent advances in AI have convinced many technologists that the next evolutionary cycle of computing is upon us.

Layers of interface abstraction, bottom to top: Command Line Interfaces, Graphical User Interfaces, & AI-powered Conversational Interfaces. (Source: Maximillian Piras)

The Next Layer Of Interface Abstraction

A branch of machine learning called generative AI drives the bulk of recent innovation. It leverages pattern recognition in datasets to establish probabilistic distributions that enable novel constructions of text, media, & code. Bill Gates believes it’s “the most important advance in technology since the graphical user interface” because it can make controlling computers even easier. A newfound ability to interpret unstructured data, such as natural language, unlocks new inputs & outputs to enable novel form factors.

Now our universe of information can be instantly invoked through an interface as intuitive as talking to another human. These are the computers we’ve dreamed of in science fiction, akin to systems like Data from Star Trek. Perhaps computers up to this point were only prototypes & we’re now getting to the actual product launch. Imagine if building the internet was laying down the tracks, AIs could be the trains to transport all of our information at breakneck speed & we’re about to see what happens when they barrel into town.

“Soon the pre-AI period will seem as distant as the days when using a computer meant typing at a C:> prompt rather than tapping on a screen.”

— Bill Gates in “The Age of AI Has Begun”

If everything is about to change, so must the mental models of software designers. As Luke Wroblewski once popularized mobile-first design, the next zeitgeist is likely AI-first. Only through understanding AI’s constraints & capabilities can we craft delight. Its influence on the discourse of interface evolution has already begun.

Large Language Models (LLMs), for instance, are a type of AI utilized in many new applications & their text-based nature leads many to believe a conversational interface, such as a chatbot, is a fitting form for the future. The notion that AI is something you talk to has been permeating across the industry for years. Robb Wilson, the co-owner of UX Magazine, calls conversation “the infinitely scalable interface” in his book The Age of Invisible Machines (2022). Noah Levin, Figma’s VP of Product Design, contends that “it’s a very intuitive thing to learn how to talk to something.” Even a herald of GUIs such as Bill Gates posits that “our main way of controlling a computer will no longer be pointing and clicking.”

Microsoft Copilot is a new conversational AI feature being integrated across their office suite. (Source: Microsoft) (Large preview)

The hope is that conversational computers will flatten learning curves. Jesse Lyu, the founder of Rabbit, asserts that a natural language approach will be “so intuitive that you don’t even need to learn how to use it.”

After all, it’s not as if Data from Star Trek came with an instruction manual or onboarding tutorial. From this perspective, the evolutionary tale of conversational interfaces superseding GUIs seems logical & echoes the earlier shift away from command lines. But others hold opposing opinions; Maggie Appleton goes as far as calling conversational interfaces like chatbots “the lazy solution.”

This might seem like a schism at first, but it’s more a symptom of a simplistic framing of interface evolution. Command lines are far from extinct; technical users still prefer them for their greater flexibility & efficiency. For use cases like software development or automation scripting, the added abstraction layer in graphical no-code tools can act as a barrier rather than a bridge.

GUIs were revolutionary but not a panacea. Yet there is ample research to suggest conversational interfaces won’t be one, either. For certain interactions, they can decrease usability, increase cost, & introduce security risk relative to GUIs.

So, what is the right interface for artificially intelligent applications? This article aims to inform that design decision by contrasting the capabilities & constraints of conversation as an interface.

Connecting The Pixels

We’ll begin with some historical context, as the key to knowing the future often starts with looking at the past. Conversational interfaces feel new, but we’ve been able to chat with computers for decades.

Joseph Weizenbaum invented the first chatbot, ELIZA, during an MIT experiment in 1966. It laid the foundation for the generations of language models that followed, from voice assistants like Alexa to those annoying phone-tree menus. Yet the majority of chatbots were seldom put to use beyond basic tasks like setting timers.

It seemed most consumers weren’t that excited to converse with computers after all. But something changed last year. Somehow we went from CNET reporting that “72% of people found chatbots to be a waste of time” to ChatGPT gaining 100 million weekly active users.

What took chatbots from arid to astonishing? Most assign credit to OpenAI’s 2018 invention (PDF) of the Generative Pre-trained Transformer (GPT). These are a new type of LLM with significant improvements in natural language understanding. Yet, at the core of a GPT is the earlier innovation of the transformer architecture introduced in 2017 (PDF). This architecture enabled the parallel processing required to capture long-term context around natural language inputs. Diving deeper, this architecture is only possible thanks to the attention mechanism introduced in 2014 (PDF). This enabled the selective weighing of an input’s different parts.

Through this assemblage of complementary innovations, conversational interfaces now seem to be capable of competing with GUIs on a wider range of tasks. It took a surprisingly similar path to unlock GUIs as a viable alternative to command lines. Of course, it required hardware like a mouse to capture user signals beyond keystrokes & screens of adequate resolution. However, researchers found the missing software ingredient years later with the invention of bitmaps.

Bitmaps allowed for complex pixel patterns that earlier vector displays struggled with. Ivan Sutherland’s Sketchpad, for instance, was the inaugural GUI but couldn’t support concepts like overlapping windows. IEEE Spectrum’s Of Mice and Menus (1989) details the progress that led to the bitmap’s invention by Alan Kay’s group at Xerox PARC. This new technology enabled the revolutionary WIMP (windows, icons, menus, and pointers) paradigm that helped onboard an entire generation to personal computers through intuitive visual metaphors.

Computing no longer required a preconceived set of steps at the outset. It may seem trivial in hindsight, but the presenters were already alluding to an artificially intelligent system during Sketchpad’s MIT demo in 1963. This was an inflection point transforming an elaborate calculating machine into an exploratory tool. Designers could now craft interfaces for experiences where a need to discover eclipsed the need for flexibility & efficiency offered by command lines.

Parallel Paradigms

Novel adjustments to existing technology made each new interface viable for mainstream usage — the cherry on top of a sundae, if you will. In both cases, the foundational systems were already available, but a different data processing decision made the output meaningful enough to attract a mainstream audience beyond technologists.

With bitmaps, GUIs can organize pixels into a grid sequence to create complex skeuomorphic structures. With GPTs, conversational interfaces can organize unstructured datasets to create responses with human-like (or greater) intelligence.

The prototypical interfaces of both paradigms were invented in the 1960s, then saw a massive delta in their development timelines — a case study unto itself. Now we find ourselves at another inflection point: in addition to calculating machines & exploratory tools, computers can act as life-like entities.

But which of our needs call for conversational interfaces over graphical ones? We see a theoretical solution to our need for companionship in the movie Her, where the protagonist falls in love with his digital assistant. But what is the benefit to those of us who are content with our organic relationships? We can look forward to validating the assumption that conversation is a more intuitive interface. It seems plausible because a few core components of the WIMP paradigm have well-documented usability issues.

Nielsen Norman Group reports that cultural differences make universal recognition of icons rare — menus trend towards an unusable mess with the inevitable addition of complexity over time. Conversational interfaces appear more usable because you can just tell the system when you’re confused! But as we’ll see in the next sections, they have their fair share of usability issues as well.

By replacing menus with input fields, we must wonder if we’re trading one set of usability problems for another.

The Cost of Conversation

Why are conversational interfaces so popular in science fiction movies? In a Rhizome essay, Martine Syms theorizes that they make “for more cinematic interaction and a leaner production.” This same cost/benefit applies to app development as well. Text completion delivered via written or spoken word is the core capability of an LLM. This makes conversation the simplest package for this capability from a design & engineering perspective.

Linus Lee, a prominent AI Research Engineer, characterizes it as “exposing the algorithm’s raw interface.” Since the interaction pattern & components are already largely defined, there isn’t much more to invent — everything can get thrown into a chat window.

“If you’re an engineer or designer tasked with harnessing the power of these models into a software interface, the easiest and most natural way to “wrap” this capability into a UI would be a conversational interface”

— Linus Lee in Imagining Better Interfaces to Language Models

This is further validated by The Atlantic’s reporting on ChatGPT’s launch as a “low-key research preview.” OpenAI’s hesitance to frame it as a product suggests a lack of confidence in the user experience. The internal expectation was so low that employees’ highest guess on first-week adoption was 100,000 users (90% shy of the actual number).

Conversational interfaces are cheap to build, so they’re a logical starting point, but you get what you pay for. If the interface doesn’t fit the use case, downstream UX debt can outweigh any upfront savings.

Forgotten Usability Principles

Steve Jobs once said, “People don’t know what they want until you show it to them.” Applying this thinking to interfaces echoes a usability evaluation called discoverability. Nielsen Norman Group defines it as a user’s ability to “encounter new content or functionality that they were not aware of.”

A well-designed interface should help users discover what features exist. The interfaces of many popular generative AI applications today revolve around an input field in which a user can type in anything to prompt the system. The problem is that it’s often unclear what a user should type in to get ideal output. Ironically, a theoretical solution to writer’s block may have a blank page problem itself.

“I think AI has a problem with these missing user interfaces, where, for the most part, they just give you a blank box to type in, and then it’s up to you to figure out what it might be able to do.”

— Casey Newton on Hard Fork Podcast

Conversational interfaces excel at mimicking human-to-human interaction but can fall short elsewhere. A popular image generator named Midjourney, for instance, only supported text input at first but is now moving towards a GUI for “greater ease of use.”

This is a good reminder that as we venture into this new frontier, we cannot forget classic human-centered principles like those in Don Norman’s seminal book The Design of Everyday Things (1988). Graphical components still seem better aligned with his advice of providing explicit affordances & signifiers to increase discoverability.

There is also Jakob Nielsen’s list of 10 usability heuristics; many of today’s conversational interfaces seem to ignore every one of them. Consider the first usability heuristic explaining how visibility of system status educates users about the consequences of their actions. It uses a metaphorical map’s “You Are Here” pin to explain how proper orientation informs our next steps.

Navigation is more relevant to conversational interfaces like chatbots than it might seem, even though all interactions take place in the same chat window. The backend of products like ChatGPT will navigate across a neural network to craft each response by focusing attention on a different part of their training datasets.

Putting a pin on the proverbial map of their parametric knowledge isn’t trivial. LLMs are so opaque that even OpenAI admits they “do not understand how they work.” Yet, it is possible to tailor inputs in a way that loosely guides a model to craft a response from different areas of its knowledge.

One popular technique for guiding attention is role-playing. You can ask an LLM to assume a role, such as by inputting “imagine you’re a historian,” to effectively switch its mode. The Prompt Engineering Institute explains that when “training on a large corpus of text data from diverse domains, the model forms a complex understanding of various roles and the language associated with them.” Assuming a role invokes associated aspects in an AI’s training data, such as tone, skills, & rationality.

For instance, a historian role responds with factual details whereas a storyteller role responds with narrative descriptions. Roles can also improve task efficiency through tooling, such as by assigning a data scientist role to generate responses with Python code.
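A role switch like this usually amounts to nothing more than a system-level instruction prepended to the conversation. The sketch below shows one minimal way that could look; the function name and the exact system-message wording are illustrative assumptions, though the `{"role": ..., "content": ...}` message shape mirrors the format common chat-style LLM APIs accept.

```python
def build_role_messages(role: str, user_input: str) -> list[dict]:
    """Wrap a user's input with a role-setting system message.

    `role` might be "historian" or "data scientist"; the system
    message steers the model toward that role's tone & skills.
    """
    return [
        {
            "role": "system",
            "content": f"You are a {role}. Respond with the tone and skills of that role.",
        },
        {"role": "user", "content": user_input},
    ]

# The returned list is the kind of `messages` payload a
# chat-completion endpoint would receive.
messages = build_role_messages("historian", "Tell me about the 1969 moon landing.")
```

From the user's side, all of this is invisible: one click on a "historian" button could call a helper like this instead of asking the user to type "imagine you're a historian" themselves.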

Roles also reinforce social norms, as Jason Yuan remarks on how “your banking AI agent probably shouldn’t be able to have a deep philosophical chat with you.” Yet conversational interfaces will bury this type of system status in their message history, forcing us to keep it in our working memory.

A theoretical AI chatbot that uses a segmented controller to let users specify a role in one click — each button automatically adjusts the LLM’s system prompt. (Source: Maximillian Piras) (Large preview)

The lack of persistent signifiers for context, like roleplay, can lead to usability issues. For clarity, we must constantly ask the AI’s status, similar to typing ls & cd commands into a terminal. Experts can manage it, but the added cognitive load is likely to weigh on novices. The problem goes beyond human memory; systems suffer from a similar cognitive overload. Due to data limits in their context windows, a user must eventually reinstate any roleplay that sits below the system level. If this type of information persisted in the interface, it would be clear to users & could be automatically reiterated to the AI in each prompt.
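One way to make that persistence concrete is to keep the active role as interface state and re-send it on every turn, so neither the user's working memory nor the model's context window has to carry it. This is a hypothetical sketch, not any product's actual implementation; the class and method names are invented for illustration.

```python
class RoleSession:
    """Hold the active role as persistent UI state (e.g. a segmented
    control) and reiterate it to the model on every single turn."""

    def __init__(self, role: str = "assistant"):
        self.role = role
        self.history: list[dict] = []

    def set_role(self, role: str) -> None:
        # One click in the UI updates the system prompt for all
        # future turns; no retyping "imagine you're a..." required.
        self.role = role

    def build_prompt(self, user_input: str) -> list[dict]:
        self.history.append({"role": "user", "content": user_input})
        # Prepending the role each turn means it survives even if
        # older messages are truncated out of the context window.
        system = {"role": "system", "content": f"You are a {self.role}."}
        return [system] + self.history
```

Because the role lives in interface state rather than buried in message history, it can also be displayed permanently on screen, restoring the "You Are Here" pin that Nielsen's first heuristic asks for.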

Character.ai achieves this by using historical figures as familiar focal points. Cultural cues lead us to ask different types of questions to “Al Pacino” than we would “Socrates.” A “character” becomes a heuristic to set user expectations & automatically adjust system settings. It’s like posting up a restaurant menu; visitors no longer need to ask what there is to eat & they can just order instead.

“Humans have limited short-term memories. Interfaces that promote recognition reduce the amount of cognitive effort required from users.”

— Jakob Nielsen in “10 Usability Heuristics for User Interface Design”

Another forgotten usability lesson is that some tasks are easier to do than to explain, especially through the direct manipulation style of interaction popularized in GUIs.

Photoshop’s new generative AI features reinforce this notion by integrating with their graphical interface. While Generative Fill includes an input field, it also relies on skeuomorphic controls like their classic lasso tool. Describing which part of an image to manipulate is much more cumbersome than clicking it.

Interactions should remain outside of an input field when words are less efficient. Sliders seem like a better fit for sizing, as saying “make it bigger” leaves too much room for subjectivity. Settings like colors & aspect ratios are easier to select than describe. Standardized controls can also let systems better organize prompts behind the scenes. If a model accepts specific values for a parameter, for instance, the interface can provide a natural mapping for how it should be input.
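That "natural mapping" between a control and a model parameter can be as simple as validating the control's value against what the model actually accepts. The sketch below is a toy illustration; the parameter names, ranges, and allowed ratios are invented, not taken from any real image model's API.

```python
# Hypothetical values a GUI might expose instead of free text.
ALLOWED_RATIOS = {"1:1", "4:3", "16:9"}


def build_generation_params(prompt: str, size: int, aspect_ratio: str) -> dict:
    """Map GUI controls (a slider & a select) onto the specific
    values a model accepts, instead of parsing 'make it bigger'."""
    if not 1 <= size <= 10:
        raise ValueError("size slider ranges from 1 to 10")
    if aspect_ratio not in ALLOWED_RATIOS:
        raise ValueError(f"aspect ratio must be one of {sorted(ALLOWED_RATIOS)}")
    return {"prompt": prompt, "size": size, "aspect_ratio": aspect_ratio}
```

A slider bound to `size` removes the subjectivity of "bigger," and the select bound to `aspect_ratio` can only ever emit values the backend understands: the interface organizes the prompt behind the scenes.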

Most of these usability principles are over three decades old now, which may lead some to wonder if they’re still relevant. Jakob Nielsen recently remarked on the longevity of their relevance, suggesting that “when something has remained true for 26 years, it will likely apply to future generations of user interfaces as well.” However, honoring these usability principles doesn’t require adhering to classic components. Apps like Krea are already exploring new GUIs to manipulate generative AI.

Prompt Engineering Is Engineering

The biggest usability problem with today’s conversational interfaces is that they offload technical work to non-technical users. In addition to low discoverability, another similarity they share with command lines is that ideal output is only attainable through learned commands. We refer to the practice of tailoring inputs to best communicate with generative AI systems as “prompt engineering”. The name itself suggests it’s an expert activity, along with the fact that becoming proficient in it can lead to a $200k salary.

Programming with natural language is a fascinating advancement but seems misplaced as a requirement in consumer applications. Just because anyone can now speak the same language as a computer doesn’t mean they know what to say or the best way to say it — we need to guide them. While all new technologies have learning curves, this one feels steep enough to hinder further adoption & long-term retention.

Prompt engineering as a prerequisite for high-quality output seems to have taken on the mystique of a dark art. Many marketing materials for AI features reinforce this through terms like “magic.” If we assume there is a positive feedback loop at play, this opaqueness must be inspiring consumer intrigue.

But positioning products in the realm of spellbooks & shamans also suggests an indecipherable experience — is this a good long-term strategy? If we assume Steve Krug’s influential lessons from Don’t Make Me Think (2000) still apply, then most people won’t bother to study proper prompting & instead will muddle through.

But the problem with trial & error in generative AI is that there aren’t any error states; you’ll always get a response. For instance, if you ask an LLM to do math, it will provide you with confident answers that may be completely wrong. It becomes harder to learn from errors when we can’t tell whether a response is a hallucination. As OpenAI’s Andrej Karpathy suggests, hallucinations are not necessarily a bug because LLMs are “dream machines,” so it all depends on how interfaces set user expectations.

“But as with people, finding the most meaningful answer from AI involves asking the right questions. AI is neither psychic nor telepathic.”

— Stephen J. Bigelow in 5 Skills Needed to Become a Prompt Engineer

Using magical language risks leading novices to the magical thinking that AI is omniscient. It may not be obvious that its knowledge is limited to the training data.

Once the magic dust fades away, software designers will realize that these decisions are the user experience!

Crafting delight comes from selecting the right prompting techniques, knowledge sourcing, & model selection for the job to be done. We should be exploring how to offload this work from our users.

  • Empty states could explain the limits of an AI’s knowledge & allow users to fill gaps as needed.
  • Onboarding flows could learn user goals to recommend relevant models tuned with the right reasoning.
  • An equivalent to fuzzy search could markup user inputs to educate them on useful adjustments.

We’ve begun to see a hint of this with OpenAI’s image generator rewriting a user’s input behind the scenes to optimize for better image output.
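A toy version of that markup idea is sketched below. Real products rewrite prompts with another LLM behind the scenes; here a hand-written lookup table stands in for that step, and both the vague terms and their expansions are invented examples.

```python
# Stand-in for an LLM-based rewriter: vague words mapped to
# concrete descriptors that tend to produce better output.
VAGUE_TERMS = {
    "nice": "warm lighting, soft color palette",
    "cool": "dramatic lighting, high contrast",
}


def rewrite_prompt(user_prompt: str) -> tuple[str, list[str]]:
    """Expand vague words into concrete descriptors & report which
    substitutions were made, so the UI can educate the user."""
    notes = []
    words = []
    for word in user_prompt.split():
        key = word.lower().strip(".,!?")
        if key in VAGUE_TERMS:
            words.append(VAGUE_TERMS[key])
            notes.append(f"'{word}' expanded to '{VAGUE_TERMS[key]}'")
        else:
            words.append(word)
    return " ".join(words), notes
```

Surfacing the `notes` alongside the result, rather than rewriting silently, is what turns the optimization into the fuzzy-search-style teaching moment described above.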

Lamborghini Pizza Delivery

Aside from the cognitive cost of usability issues, there is a monetary cost to consider as well. Every interaction with a conversational interface invokes an AI to reason through a response. This requires a lot more computing power than clicking a button within a GUI. At the current cost of computing, this expense can be prohibitive. There are some tasks where the value from added intelligence may not be worth the price.

For example, the Wall Street Journal suggests using an LLM for tasks like email summarization is “like getting a Lamborghini to deliver a pizza.” Higher costs are, in part, due to the inability of AI systems to leverage economies of scale in the way standard software does. Each interaction requires intense calculation, so costs scale linearly with usage. Without a zero-marginal cost of reproduction, the common software subscription model becomes less tenable.
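The linear scaling is easy to see with a back-of-the-envelope calculation. The token counts and per-token prices below are placeholder assumptions, not any vendor's actual rates.

```python
def llm_request_cost(input_tokens: int, output_tokens: int,
                     price_in_per_1k: float = 0.01,
                     price_out_per_1k: float = 0.03) -> float:
    """Cost of one LLM call under assumed per-1k-token prices."""
    return (input_tokens / 1000) * price_in_per_1k \
        + (output_tokens / 1000) * price_out_per_1k


# One email summary: a 2,000-token email in, a 200-token summary out.
one_summary = llm_request_cost(2000, 200)  # 0.026 under these prices

# Unlike a GUI click, which is roughly free at the margin, a
# thousand summaries cost a thousand times one summary.
thousand_summaries = 1000 * one_summary
```

There is no equivalent of serving a cached page to the next user; every interaction pays the full inference bill, which is exactly why the flat-rate subscription math gets uncomfortable.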

Will consumers pay higher prices for conversational interfaces or prefer AI capabilities wrapped in cost-effective GUI? Ironically, this predicament is reminiscent of the early struggles GUIs faced. The processor logic & memory speed needed to power the underlying bitmaps only became tenable when the price of RAM chips dropped years later. Let’s hope history repeats itself.

Another cost to consider is the security risk: what if your Lamborghini gets stolen during the pizza delivery? If you let people ask AI anything, some of those questions will be manipulative. Prompt injections are attempts to infiltrate systems through natural language. The right sequence of words can turn an input field into an attack vector, allowing malicious actors to access private information & integrations.

So be cautious when positioning AI as a member of the team since employees are already regarded as the weakest link in cyber security defense. The wrong business logic could accidentally optimize the number of phishing emails your organization falls victim to.

Good design can mitigate these costs by identifying where AI is most meaningful to users. Emphasize human-like conversational interactions at these moments but use more cost-effective elements elsewhere. Protect against prompt injections by partitioning sensitive data so it’s only accessible by secure systems. We know LLMs aren’t great at math anyway, so free them up for creative collaboration instead of managing boring billing details.
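The partitioning idea can be sketched in a few lines: sensitive fields never enter the model's context at all, and the prompt only carries an opaque reference that a secure, non-LLM system resolves later. Field names and the reference format here are hypothetical.

```python
# Fields that must never appear in an LLM prompt, injected or not.
SENSITIVE_FIELDS = {"ssn", "card_number", "account_balance"}


def build_context(user_record: dict) -> dict:
    """Partition a user record before prompting: strip sensitive
    fields & replace them with an opaque reference that only a
    secure backend (not the model) can dereference."""
    safe = {k: v for k, v in user_record.items() if k not in SENSITIVE_FIELDS}
    safe["secure_ref"] = f"user:{user_record['id']}"
    return safe
```

Even a successful prompt injection can then only exfiltrate what was in the context to begin with, which is the whole point of the partition.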

Generations Are Predictions

In my previous Smashing article, I explained the concept of algorithm-friendly interfaces. They view every interaction as an opportunity to improve understanding through bidirectional feedback. They provide system feedback to users while reporting performance feedback to the system. Their success is a function of maximizing data collection touchpoints to optimize predictions. Accuracy gains in predictive output tend to result in better user retention. So good data compounds in value by reinforcing itself through network effects.

While my previous focus was on content recommendation algorithms, could we apply this to generative AI? While the output is very different, they’re both predictive models. We can customize these predictions with specific data like the characteristics, preferences, & behavior of an individual user.

So, just as Spotify learns your musical taste to recommend new songs, we could theoretically personalize generative AI. Midjourney could recommend image generation parameters based on past usage or preferences. ChatGPT could invoke the right roles at the right time (hopefully with system status visibility).

This territory is still somewhat uncharted, so it’s unclear how algorithm-friendly conversational interfaces are. The same discoverability issues affecting their usability may also affect their ability to analyze engagement signals. An inability to separate signal from noise will weaken personalization efforts. Consider a simple interaction like tapping a “like” button; it sends a very clean signal to the backend.

What is the conversational equivalent of this? Inputting the word “like” doesn’t seem like as reliable a signal because it may be mentioned in a simile or mindless affectation. Based on the insights from my previous article, the value of successful personalization suggests that any regression will be acutely felt in your company’s pocketbook.

Perhaps a solution is using another LLM as a reasoning engine to format unstructured inputs automatically into clear engagement signals. But until their data collection efficiency is clear, designers should ask if the benefits of a conversational interface outweigh the risk of worse personalization.
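To make the signal-versus-noise problem concrete, here is a deliberately naive classifier. A production system might ask a second LLM to do this classification, as suggested above; a keyword heuristic stands in for that call here, and the word lists are invented. Note that the ambiguous word "like" is deliberately absent from them.

```python
POSITIVE = {"thanks", "perfect", "great", "love"}
NEGATIVE = {"wrong", "no", "useless", "retry"}


def extract_signal(message: str) -> str:
    """Turn a free-form chat reply into a clean engagement signal
    for the personalization backend. Ambiguous messages are tagged
    neutral so noise isn't fed into the recommendation loop."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "neutral"
```

Whatever does the classifying, the output must be as unambiguous as a "like" button press; otherwise the personalization loop trains on mush.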

Towards The Next Layer Of Abstraction

As this new paradigm shift in computing evolves, I hope this is a helpful primer for thinking about the next interface abstractions. Conversational interfaces will surely be a mainstay in the next era of AI-first design. Adding voice capabilities will allow computers to augment our abilities without arching our spines through unhealthy amounts of screen time. Yet conversation alone won’t suffice, as we also must design for needs that words cannot describe.

So, if no interface is a panacea, let’s avoid simplistic evolutionary tales & instead aspire towards the principles of great experiences. We want an interface that is integrated, contextual, & multimodal. It knows sometimes we can only describe our intent with gestures or diagrams. It respects when we’re too busy for a conversation but need to ask a quick question. When we do want to chat, it can see what we see, so we aren’t burdened with writing lengthy descriptions. When words fail us, it still gets the gist.

Avoiding Tunnel Visions Of The Future

This moment reminds me of a cautionary tale from the days of mobile-first design. A couple of years after the iPhone’s debut, touchscreens became a popular motif in collective visions of the future. But Bret Victor, the revered Human-Interface Inventor (his title at Apple), saw touchscreens more as a tunnel vision of the future.

In his brief rant on peripheral possibilities, he remarks how they ironically ignore touch altogether. Most of the interactions mainly engage our sense of sight instead of the rich capabilities our hands have for haptic feedback. How can we ensure that AI-first design amplifies all our capabilities?

“A tool addresses human needs by amplifying human capabilities.”

— Bret Victor in “A Brief Rant on the Future of Interaction Design”

I wish I could leave you with a clever-sounding formula for when to use conversational interfaces. Perhaps some observable law stating that the mathematical relationship expressed by D∝1/G elucidates that ‘D’, representing describability, exhibits an inverse correlation with ‘G’, denoting graphical utility — therefore, as the complexity it takes to describe something increases, a conversational interface’s usability diminishes. While this observation may be true, it’s not very useful.

Honestly, my uncertainty at this moment humbles me too much to prognosticate on new design principles. What I can do instead is take a lesson from the recently departed Charlie Munger & invert the problem.

Designing Backwards

If we try to design the next abstraction layer looking forward, we seem to end up with something like a chatbot. We now know why this is an incomplete solution on its own. What if we look at the problem backward to identify the undesirable outcomes that we want to avoid? Avoiding stupidity is easier than seeking brilliance, after all.

An obvious mistake to steer clear of is forcing users to engage in conversations without considering time constraints. When the time is right to chat, it should be in a manner that doesn’t replace existing usability problems with equally frustrating new ones. For basic tasks of equivalent importance to delivering pizza, we should find practical solutions not of equivalent extravagance to driving a Lamborghini. Furthermore, we ought not to impose prompt engineering expertise as a requirement for non-expert users. Lastly, as systems become more human-like, they should not inherit our gullibility, lest our efforts inadvertently optimize for exponentially easier access to our private data.

A more intelligent interface won’t make those stupid mistakes.

Thanks to Michael Sands, Evan Miller, & Colin Cowley for providing feedback on early drafts of this article.

]]>
hello@smashingmagazine.com (Maximillian Piras)