Category Archives: General

Monthly Web Development Update 4/2018: On Effort, Bias, And Being Productive

Anselm Hannemann

2018-04-13T14:30:19+02:00
2018-04-13T12:49:49+00:00

These days, thinking long-term is one of the biggest challenges. In a world where we live with devices that last only a few months, or maybe a few years, and where we buy things only to throw them away days or weeks later, the term ‘effort’ gains a new meaning.

Recently, I was reading an essay on ‘Yatnah’, ‘Effort’. I spent a lot of time outside in nature in the past weeks and set up a small plot to grow some vegetables. I also attended a workshop to learn the craft of grafting fruit trees. When you graft a tree, you realize that our fast-living, short-term lifestyle is very different from how nature works. I grafted a tree that is supposed to grow for decades now, and if you cut down a tree that has been there for forty years, it’ll take another forty to grow one that is similarly tall.

I’d love for us all to try to create more long-lasting work, software that still works in a decade, and to put effort into learning how we can make that happen. For now, I’ll leave you with this quote and a bunch of interesting articles.

“In our modern world it can be tempting to throw effort away and replace it with a few phrases of positive thinking. But there is just no substitute for practice”.

— Kino Macgregor

News

  • The Safari Technology Preview 52 removes support for all NPAPI plug-ins other than Adobe Flash and adds support for preconnect link headers.
  • Chrome 66 Beta brings the CSS Typed Object Model, Async Clipboard API, AudioWorklets, and support to use calc(), min(), and max() in Media Queries. Additionally, select and textarea fields now support the autocomplete attribute, and the catch clause of a try statement can be used without a parameter from now on.
  • iOS 11.3 is available to the public now, and, as already announced, the release brings support for Progressive Web Apps to iOS. Maximiliano Firtman shares what this means, what works and what doesn’t work (yet).
  • Safari 11.1 is now available for everyone. Here is a summary of all the new WebKit features it includes.
Progressive Web App on iOS
Progressive Web Apps for iOS are here. Full screen, offline capable, and even visible in the iPad’s dock. (Image credit)
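One of those Chrome 66 additions, the parameter-less catch clause, is easy to show in a few lines. A minimal sketch; the parseJson helper is made up for illustration:

```javascript
// Optional catch binding, supported from Chrome 66 on:
// the catch clause no longer requires an error parameter.
function parseJson(text) {
  try {
    return JSON.parse(text);
  } catch {
    // No "(err)" needed when we don't use the error object
    return null;
  }
}

console.log(parseJson('{"a": 1}')); // { a: 1 }
console.log(parseJson("not json")); // null
```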

General

  • Anil Dash reflects on what the web was intended to be and how today’s web differs from this: “At a time when millions are losing trust in the web’s biggest sites, it’s worth revisiting the idea that the web was supposed to be made out of countless little sites. Here’s a look at the neglected technologies that were supposed to make it possible.”
  • Morten Rand-Hendriksen wrote about using ethics in web design and which questions we should ask ourselves when suggesting a solution, creating a design, or building a new feature. Especially when we think we’re making something ‘smart’, it’s important to first ask whether it actually helps people.
  • A lot of protest and discussion accompanied the Facebook / Cambridge Analytica affair, most of it pointing out the technological problems with Facebook’s permission model. But the crux lies in how Facebook designed their company and which ethical baseline they set. If we don’t want something like this to happen again, it’s up to us to design the services we want.
  • Brendan Dawes shares why he thinks URLs are a masterpiece and a user experience by themselves.
  • Charlie Owen’s talk transcription of “Dear Developer, The Web Isn’t About You” is a good summary of why we as developers need to think beyond what’s good for us and consider what serves the users and how we can achieve that instead.

UI/UX

  • B. Kaan Kavuştuk shares his thoughts about why we won’t be able to build a perfect design or codebase on the first try, no matter how much experience we have. Instead, it’s the constant small improvements that pave the way to perfection.
  • Trine Falbe introduces us to Ethical Design with a practical getting-started guide. It shows alternatives and things to think about when building a business or product. It doesn’t matter much if you’re the owner, a developer, a designer or a sales person, this is about serving users and setting the ground for real and sustainable trust.
  • Josh Lovejoy shares his learnings from working on inclusive tech solutions and why it takes more than good intentions to create fair, inclusive technology. The article explores in depth why human judgment is difficult and often based on bias, and why it therefore isn’t easy to design and develop algorithms that treat different people equally.
  • The HSB (Hue, Saturation, Brightness) color system isn’t especially new, but a lot of people still don’t understand its advantages. Erik D. Kennedy explains its principles and advantages step-by-step.
  • While there’s more discussion about inclusive design these days, it’s often filed under accessibility or treated as a set of technical decisions. Robert del Prado now shares how important inclusive design thinking is and why it’s much more about users in general than about specific people with specific disabilities. Inclusive design brings people together, regardless of who they are, where they live, and what they can afford. And isn’t it the goal of every product to be successful by reaching as many people as possible? Maybe we need to discuss this with marketing people as well.
  • Anton Lovchikov shares ways to improve optical adjustments in components. It’s an interesting study on how very small changes can make quite a difference.
Fair Is Not The Default
Afraid or angry? Which emotion we think the baby is showing depends on whether we think it’s a girl or a boy. Josh Lovejoy explains how personal bias and judgments like this one lead to unfair products. (Image credit)

Tooling

  • Brian Schrader found an unknown feature in Git which is very helpful to test ideas quickly: Git Notes lets us add, remove, or read notes attached to objects, without touching the objects themselves and without needing to commit the current state.
  • For many projects, I prefer to use npm scripts over calling gulp or direct webpack tasks. Michael Kühnel shares some useful tricks for npm scripts, including how to allow CLI option parameters or how to watch tasks and alert notices on error.
  • Anton Sten explains why new tools don’t always equal productivity. We all love new design tools, and new ones such as Sketch, Figma, Xd, or Invision Studio keep popping up. But despite these tools solving a lot of common problems and making some things easier, productivity is mostly about what works for your problem and not what is newest. If you need to create a static mockup and Photoshop is what you know best, why not use it?
  • There’s a new, fast DNS service available from Cloudflare. Finally, a better alternative to the much-used Google DNS servers, it is available under 1.1.1.1. The new DNS is one of the fastest and probably one of the most secure ones out there. Cloudflare put a lot of effort into encrypting the service and partnering up with Mozilla to make DNS over HTTPS work in order to close a big privacy gap that until now leaked all your browsing data to the DNS provider.
  • I’ve heard a lot about machine learning on iOS already, but despite the interesting fact that it runs on the device without sending everything to a cloud, I hadn’t found out how to make use of it in apps yet. Luckily, Manu Rink put together a nice guide in which she explains machine learning in iOS for beginners.
  • There’s great news for fans of Git GUIs: Tower now offers a new beta version that includes pull request support, interactive rebase workflows, quick actions, reflog, and search. It’s an amazing update that makes working with the software much faster than before, and even for a command-line lover like me it’s a nice option.
Machine Learning In iOS For The Noob
Manu Rink shows how machine learning in iOS works by building an offline handwritten text recognition. (Image credit)
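Brian Schrader’s Git Notes tip from above boils down to a couple of commands. Here’s a minimal sketch in a throwaway repository; the note text and identity are made up:

```shell
# Git Notes demo: attach a note to a commit without changing the commit.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.name=demo -c user.email=demo@example.org \
  commit -q --allow-empty -m "initial commit"
hash_before=$(git rev-parse HEAD)

# Add a note to HEAD; it is stored under refs/notes/commits
git -c user.name=demo -c user.email=demo@example.org \
  notes add -m "idea: try a different caching strategy here"

# Read the note back; the commit hash is untouched
git notes show HEAD
hash_after=$(git rev-parse HEAD)
```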

CSS

  • Amber Wilson shares some insights into what it feels like to be thrown into a complex project in order to do the styling there. She rightly says that “nobody said CSS is easy” and expresses how important it is that we as developers face inconvenient situations in order to grow our knowledge.
  • Ana Tudor is known for her special CSS skills. Now she explores and describes how we can achieve scooped corners in CSS with some clever tricks.
Scooped Corners
Scooped corners? Ana Tudor shows how to do it. (Image credit)

JavaScript

  • WebKit got an upgrade for the Clipboard API, and the team gives some very interesting insights into how it works and how Safari will handle some of the common challenges with clipboard data (e.g. images).
  • If you work with key value stores that live only in the frontend, IDB-Keyval is a great lightweight library that simplifies working with IndexedDB and localStorage.
  • Ever wanted to create graphics from your data with a hand-drawn, sketchy look on a website? Rough.js lets you do just that. It’s usually Canvas-based (for better performance and less data) but can also draw SVG paths.
  • If you need a drag-and-drop reorder module, there’s a smooth and accessible solution available now: dragon-drop.
  • For many years, we could only read CSS values in their computed form, and even that wasn’t flexible or nice to work with. But now CSS has a proper object-based API for working with values in JavaScript: the CSS Typed Object Model. It’s currently only available in the upcoming Chrome 66, but it’s definitely a promising feature I’d love to use in my code soon.
  • The React.js documentation now has an extra section that explains how to easily and programmatically manage focus states to ensure your UI is accessible.
  • James Milner shares how we can use abortable fetch to cancel requests.
  • There are a few articles about Web Push Notifications out there already, but Oleksii Rudenko’s getting-started guide is a great primer that explains the principles very well.
  • In the past years, we got a lot of new features on the JavaScript platform. And since it’s hard to remember all the new stuff, Raja Rao DV summed up “Everything new in ECMAScript 2016, 2017, and 2018”.
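The abortable fetch James Milner writes about is built on the AbortController API; a minimal sketch (the URL is just a placeholder):

```javascript
// Abortable fetch: pass an AbortController's signal to fetch(),
// then call abort() to cancel the request in flight.
const controller = new AbortController();

fetch("https://example.com/some-large-resource", {
  signal: controller.signal,
})
  .then((response) => console.log("loaded:", response.status))
  .catch((err) => {
    // A cancelled request rejects with an AbortError
    if (err.name === "AbortError") console.log("request cancelled");
  });

// Cancel the request, e.g. when the user navigates away
controller.abort();
```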

Work & Life

  • To raise awareness for how common such situations are for all of us, James Bennett shares an embarrassing situation where he made a simple mistake that took him a long time to find out. It’s not just me making mistakes, it’s not just you, and not just James — all of us make mistakes, and as embarrassing as they seem to be in that particular situation, there’s nothing to feel bad about.
  • Adam Blanchard says “People are machines. We need maintenance, too.” and creates a comparison for engineers to understand why we need to take care of ourselves and also why we need people who take care of us. This is an insight into what People Engineers do, and why it’s so important for companies to hire such people to ensure a team is healthy.
  • If there’s one thing we don’t talk much about in the web industry, it’s retirement. Jan Chipchase now shares a lot of interesting thoughts about retirement.
  • Rebecca Downes shares some insights into her PhD on remote teams, revealing under which circumstances remote teams are great and under which they’re not.
What would people engineers do
People need maintenance, too. That’s where the People Engineer comes in. (Image credit)

Going Beyond…

We hope you enjoyed this Web Development Update. The next one is scheduled for Friday, May 18th. Stay tuned.

Smashing Editorial(cm)


Source: Smashing Magazine

New Course: A Practical Approach to Working With Vue.js and APIs

Vue.js is an increasingly popular open-source JavaScript framework for building user interfaces. Learn how to use Vue and integrate it with an API in our new short course, A Practical Approach to Working With Vue.js and APIs.

What You’ll Learn

In this course, you’ll make practical use of the Vue.js framework. In five lessons, you’ll learn to use an API to pull in data that you will then display using Vue.js.

Screenshot from the Vuejs course

You’ll be working with an open-source API service called the “Open Movie Database”, pulling in data to display movie posters and allow users to search for their favourite movies.

Watch the Introduction

 

Take the Course

You can take our new course straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to this course, but also to our growing library of over 1,000 video courses and industry-leading eBooks on Envato Tuts+. 

Plus you now get unlimited downloads from the huge Envato Elements library of 490,000+ creative assets. Create with unique fonts, photos, graphics and templates, and deliver better projects faster.


Source: Webdesign Tuts+

Automating Your Feature Testing With Selenium WebDriver

Nils Schütte

2018-04-12T12:45:54+02:00
2018-04-13T12:49:49+00:00

This article is for web developers who wish to spend less time testing the front end of their web applications but still want to be confident that every feature works fine. It will save you time by automating repetitive online tasks with Selenium WebDriver. You will find a step-by-step example for automating and testing the login function of WordPress, but you can also adapt the example for any other login form.

What Is Selenium And How Can It Help You?

Selenium is a framework for the automated testing of web applications. Using Selenium, you can basically automate every task in your browser as if a real person were to execute the task. The interface used to send commands to the different browsers is called Selenium WebDriver. Implementations of this interface are available for every major browser, including Mozilla Firefox, Google Chrome and Internet Explorer.

Automating Your Feature Testing With Selenium WebDriver

Which type of web developer are you? Are you the disciplined type who tests all key features of your web application after each deployment? If so, you are probably annoyed by how much time this repetitive testing consumes. Or are you the type who just doesn’t bother with testing key features and always thinks, “I should test more, but I’d rather develop new stuff”? If so, you probably only find bugs by chance or when your client or boss complains about them.

I have been working for a well-known online retailer in Germany for quite a while, and I always belonged to the second category: It was so exciting to think of new features for the online shop, and I didn’t like at all going over all of the previous features again after each new software deployment. So, the strategy was more or less to hope that all key features would work.

One day, we had a serious drop in our conversion rate and started digging in our web analytics tools to find the source of this drop. It took quite a while before we found out that our checkout had not been working properly since the previous software deployment.

This was the day when I started to do some research about automating our testing process of web applications, and I stumbled upon Selenium and its WebDriver. Selenium is basically a framework that allows you to automate web browsers. WebDriver is the name of the key interface that allows you to send commands to all major browsers (mobile and desktop) and work with them as a real user would.

Preparing The First Test With Selenium WebDriver

First, I was a little skeptical of whether Selenium would suit my needs because the framework is most commonly used in Java, and I am certainly not a Java expert. Later, I learned that being a Java expert is not necessary to take advantage of the power of the Selenium framework.

As a simple first test, I tested the login of one of my WordPress projects. Why WordPress? Just because using the WordPress login form is an example that everybody can follow more easily than if I were to refer to some custom web application.

What do you need to start using Selenium WebDriver? Because I decided to use the most common implementation of Selenium in Java, I needed to set up my little Java environment.

If you want to follow my example, you can use the Java environment of your choice. If you haven’t set one up yet, I suggest installing Eclipse and making sure you are able to run a simple “Hello world” script in Java.

Because I wanted to test the login in Chrome, I made sure that the Chrome browser was already installed on my machine. That’s all I did in preparation.

Downloading The ChromeDriver

All major browsers provide their own implementation of the WebDriver interface. Because I wanted to test the WordPress login in Chrome, I needed to get the WebDriver implementation of Chrome: ChromeDriver.

I extracted the ZIP archive and stored the executable file chromedriver.exe in a location that I could remember for later.

Setting Up Our Selenium Project In Eclipse

The steps I took in Eclipse are probably pretty basic to someone who works a lot with Java and Eclipse. But for those like me, who are not so familiar with this, I will go over the individual steps:

  1. Open Eclipse.
  2. Click the “New” icon.
    Creating a new project in Eclipse
    Creating a new project in Eclipse
  3. Choose the wizard to create a new “Java Project,” and click “Next.”
    Chosing the java-project wizard
    Choose the java-project wizard.
  4. Give your project a name, and click “Finish.”
    Eclipse project wizard
    The Eclipse project wizard
  5. Now you should see your new Java project on the left side of the screen.
    Java project successfully created
    We successfully created a project to run the Selenium WebDriver.

Adding The Selenium Library To Our Project

Now we have our Java project, but Selenium is still missing. So, next, we need to bring the Selenium framework into our Java project. Here are the steps I took:

  1. Download the latest version of the Java Selenium library.
    Downloading the Selenium library
    Download the Selenium library.
  2. Extract the archive, and store the folder in a place you can remember easily.
  3. Go back to Eclipse, and go to “Project” → “Properties.”
    Eclipse Properties
    Go to properties to integrate the Selenium WebDriver in your project.
  4. In the dialog, go to “Java Build Path” and then to the “Libraries” tab.
  5. Click on “Add External JARs.”
    Adding the Selenium lib to your Java build path.
    Add the Selenium lib to your Java build path.
  6. Navigate to the folder with the Selenium library that you just downloaded. Highlight all .jar files, and click “Open.”
    Selecting the correct files of the Selenium lib.
    Select all files of the lib to add to your project.
  7. Repeat this for all .jar files in the subfolder libs as well.
  8. Eventually, you should see all .jar files in the libraries of your project:
    Selenium WebDriver framework successfully integrated into your project
    The Selenium WebDriver framework has now been successfully integrated into your project!

That’s it! Everything we’ve done until now is a one-time task. You could use this project now for all of your different tests, and you wouldn’t need to do the whole setup process for every test case again. Kind of neat, isn’t it?

Creating Our Testing Class And Letting It Open the Chrome Browser

Now we have our Selenium project, but what next? To see whether it works at all, I wanted to try something really simple, like just opening my Chrome browser.

To do this, I needed to create a new Java class from which I could execute my first test case. Into this executable class, I copied a few Java code lines, and believe it or not, it worked! Magically, the Chrome browser opened and, after a few seconds, closed all by itself.

Try it yourself:

  1. Click on the “New” button again (while you are in your new project’s folder).
    New class in eclipse
    Create a new class to run the Selenium WebDriver.
  2. Choose the “Class” wizard, and click “Next.”
    New class wizard in eclipse
    Choose the Java class wizard to create a new class.
  3. Name your class (for example, “RunTest”), and click “Finish.”
    Eclipse Java Class wizard
    The Eclipse Java Class wizard.
  4. Replace all code in your new class with the following code. The only thing you need to change is the path to chromedriver.exe on your computer:
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    
    /**
     * @author Nils Schuette via frontendtest.org
     */
    public class RunTest {
        static WebDriver webDriver;
        /**
         * @param args
         * @throws InterruptedException
         */
        public static void main(final String[] args) throws InterruptedException {
            // Telling the system where to find the chrome driver
            System.setProperty(
                    "webdriver.chrome.driver",
                    "C:/PATH/TO/chromedriver.exe");
    
            // Open the Chrome browser
            webDriver = new ChromeDriver();
    
            // Waiting a bit before closing
            Thread.sleep(7000);
    
            // Closing the browser and WebDriver
            webDriver.close();
            webDriver.quit();
        }
    }
    
  5. Save your file, and click on the play button to run your code.
    Run Eclipse project
    Running your first Selenium WebDriver project.
  6. If you have done everything correctly, the code should open a new instance of the Chrome browser and close it shortly thereafter.
    Chrome Browser blank window
    The Chrome Browser opens itself magically. (Large preview)

Testing The WordPress Admin Login

Now I was optimistic that I could automate my first little feature test. I wanted the browser to navigate to one of my WordPress projects, login to the admin area and verify that the login was successful. So, what commands did I need to look up?

  1. Navigate to the login form,
  2. Locate the input fields,
  3. Type the username and password into the input fields,
  4. Hit the login button,
  5. Compare the current page’s headline to see if the login was successful.

Again, after I had done all the necessary updates to my code and clicked on the run button in Eclipse, my browser started to magically work itself through the WordPress login. I successfully ran my first automated website test!

If you want to try this yourself, replace all of the code of your Java class with the following. I will go through the code in detail afterwards. Before executing the code, you must replace four values with your own:

  1. The location of your chromedriver.exe file (as above),

  2. The URL of the WordPress admin account that you want to test,

  3. The WordPress username,

  4. The WordPress password.

Then, save and let it run again. It will open Chrome, navigate to the login of your WordPress website, login and check whether the h1 headline of the current page is “Dashboard.”

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

/**
 * @author Nils Schuette via frontendtest.org
 */
public class RunTest {
    static WebDriver webDriver;
    /**
     * @param args
     * @throws InterruptedException
     */
    public static void main(final String[] args) throws InterruptedException {
        // Telling the system where to find the chrome driver
        System.setProperty(
                "webdriver.chrome.driver",
                "C:/PATH/TO/chromedriver.exe");

        // Open the Chrome browser
        webDriver = new ChromeDriver();

        // Maximize the browser window
        webDriver.manage().window().maximize();

        if (testWordpresslogin()) {
            System.out.println("Test WordPress Login: Passed");
        } else {
            System.out.println("Test WordPress Login: Failed");

        }

        // Close the browser and WebDriver
        webDriver.close();
        webDriver.quit();
    }

    private static boolean testWordpresslogin() {
        try {
            // Open the WordPress admin login page
            webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");

            // Type in the username
            webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");

            // Type in the password
            webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");

            // Click the Submit button
            webDriver.findElement(By.id("wp-submit")).click();

            // Wait a little bit (7000 milliseconds)
            Thread.sleep(7000);

            // Check whether the h1 equals “Dashboard”
            if (webDriver.findElement(By.tagName("h1")).getText()
                    .equals("Dashboard")) {
                return true;
            } else {
                return false;
            }

        // If anything goes wrong, return false.
        } catch (final Exception e) {
            System.out.println(e.getClass().toString());
            return false;
        }
    }
}

If you have done everything correctly, your output in the Eclipse console should look something like this:

Eclipse console test result.
The Eclipse console states that our first test has passed. (Large preview)

Understanding The Code

Because you are probably a web developer and have at least a basic understanding of other programming languages, I am sure you already grasp the basic idea of the code: We have created a separate method, testWordpresslogin, for the specific test case, which is called from our main method.

Depending on whether the method returns true or false, you will get an output in your console telling you whether this specific test passed or failed.

This is not necessary, but this way you can easily add many more test cases to this class and still have readable code.

Now, step by step, here is what happens in our little program:

  1. First, we tell our program where it can find the specific WebDriver for Chrome.
    System.setProperty("webdriver.chrome.driver","C:/PATH/TO/chromedriver.exe");
  2. We open the Chrome browser and maximize the browser window.
    webDriver = new ChromeDriver();
    webDriver.manage().window().maximize();
  3. This is where we jump into our submethod and check whether it returns true or false.
    if (testWordpresslogin()) …
  4. The following part in our submethod might not be intuitive to understand:
    The try{…}catch{…} blocks. If everything goes as expected, only the code in try{…} will be executed, but if anything goes wrong while executing try{…}, execution continues in catch{…}. Whenever you try to locate an element with findElement and the browser is not able to locate it, an exception is thrown and the code in catch{…} is executed. In my example, the test will be marked as “failed” whenever something goes wrong and the catch{…} block is executed.
  5. In the submethod, we start by navigating to our WordPress admin area and locating the fields for the username and the password by looking for their IDs. Also, we type the given values in these fields.
    webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");
    webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");
    webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");

    WordPress login form
    Selenium fills out our login form
  6. After filling in the login form, we locate the submit button by its ID and click it.
    webDriver.findElement(By.id("wp-submit")).click();
  7. In order to follow the test visually, I include a 7-second pause here (7000 milliseconds = 7 seconds).
    Thread.sleep(7000);
  8. If the login is successful, the h1 headline of the current page should now be “Dashboard,” referring to the WordPress admin area. Because the h1 headline should exist only once on every page, I have used the tag name here to locate the element. In most other cases, the tag name is not a good locator because an HTML tag name is rarely unique on a web page. After locating the h1, we extract the text of the element with getText() and check whether it is equal to the string “Dashboard.” If the login is not successful, we would not find “Dashboard” as the current h1. Therefore, I’ve decided to use the h1 to check whether the login is successful.
    if (webDriver.findElement(By.tagName("h1")).getText().equals("Dashboard")) 
        {
            return true;
        } else {
            return false;
        }
    

    WordPress Dashboard
    Letting the WebDriver check whether we have arrived on the Dashboard: Test passed! (Large preview)
  9. If anything has gone wrong in the previous part of the submethod, the program would have jumped directly to the following part. The catch block will print the type of exception that happened to the console and afterwards return false to the main method.
    catch (final Exception e) {
                System.out.println(e.getClass().toString());
                return false;
            }
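The pass/fail mechanics of steps 3, 4, and 9 can be seen in isolation in this plain-Java sketch; no Selenium is required, and the thrown RuntimeException simply stands in for a failed findElement call:

```java
public class TryCatchDemo {

    // Hypothetical stand-in for testWordpresslogin(): the throw plays
    // the role of findElement failing to locate an element.
    static boolean runTest() {
        try {
            throw new RuntimeException("no such element");
        } catch (final Exception e) {
            // Execution continues here, so the test reports a failure
            System.out.println(e.getClass().toString());
            return false;
        }
    }

    public static void main(final String[] args) {
        if (runTest()) {
            System.out.println("Test WordPress Login: Passed");
        } else {
            System.out.println("Test WordPress Login: Failed");
        }
    }
}
```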

Adapting The Test Case

This is where it gets interesting if you want to adapt and add test cases of your own. You can see that we always call methods of the webDriver object to do something with the Chrome browser.

First, we maximize the window:

webDriver.manage().window().maximize();

Then, in a separate method, we navigate to our WordPress admin area:

webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");

There are other methods of the webDriver object we can use. Besides the two above, you will probably use this one a lot:

webDriver.findElement(By. …)

The findElement method helps us find different elements in the DOM. There are different options to find elements:

  • By.id
  • By.cssSelector
  • By.className
  • By.linkText
  • By.name
  • By.xpath

If possible, I recommend using By.id because the ID of an element should always be unique (unlike, for example, a className), and it is usually not affected if the structure of your DOM changes (unlike, say, an XPath).

Note: You can read more about the different options for locating elements with WebDriver over here.

As soon as you get ahold of an element using the findElement method, you can call the different available methods of the element. The most common ones are sendKeys, click and getText.

We’re using sendKeys to fill in the login form:

webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");

We have used click to submit the login form by clicking on the submit button:

webDriver.findElement(By.id("wp-submit")).click();

And getText has been used to check what text is in the h1 after the submit button is clicked:

webDriver.findElement(By.tagName("h1")).getText()

Note: Be sure to check out all the available methods that you can use with an element.

Conclusion

Ever since I discovered the power of Selenium WebDriver, my life as a web developer has changed. I simply love it. The deeper I dive into the framework, the more possibilities I discover — running one test simultaneously in Chrome, Internet Explorer and Firefox or even on my smartphone, or taking screenshots automatically of different pages and comparing them. Today, I use Selenium WebDriver not only for testing purposes, but also to automate repetitive tasks on the web. Whenever I see an opportunity to automate my work on the web, I simply copy my initial WebDriver project and adapt it to the next task.

If you think that Selenium WebDriver is for you, I recommend looking at Selenium’s documentation to find out about all of the possibilities of Selenium (such as running tasks simultaneously on several (mobile) devices with Selenium Grid).

I look forward to hearing whether you find WebDriver as useful as I do!



Source: Smashing Magazine

Inspiration: 10 Examples of Pure CSS Animation on CodePen

Our mobile browsers continue to get more powerful and better at showing us amazing, beautiful animations. When combined with the layout power of CSS, it’s possible to create some gorgeous work without using any images at all. The result is scalable, quick-loading, and well, impressive to see.

Let’s take a look at what amazing things people are making and animating with only HTML and CSS.

1. Pure CSS Biker

There’s so much going on here it’s hard to believe it’s simply HTML and CSS! Rotating animations and multiple, layered movements combine to make it look like this cyclist and his bike are made of jelly. Nice use of BEM in the class naming too!

2. Pure CSS Saturn Hula Hooping

Combining lots of moving parts can make a simple set of HTML elements seem like a more complex animation, and that’s something this demo does well. See how the two planets interact, while the looping “particles” are just scattered enough to seem random.

3. Color Layers CSS Animation

Simple colored layers might not seem like much, but when they move they can convey loads of character. In this example, a set of semi-transparent HTML paragraph tags is animated, and the resulting stacked animation is hypnotic.

4. Ice-Cream Loader

We don’t always need JPGs or PNGs to make beautiful images, and this deliciously watchable little ice-cream-themed loader is another example. One container div and four others are all it takes. The resulting code is also much smaller than an animated GIF would be.

5. Pure CSS Pigeons

When you combine artistic use of HTML elements with some character-filled animation, this is the result. An amazingly smooth yet busy animation full of fun. Massive respect to Julia Muzafarova for what must have been incredibly fiddly work building all those sets of keyframes. More than a few coffees too!

6. Sleeping Cat

Bringing together lots of simple HTML elements with a bunch of subtle, fun animations, this sleepy cat has a lot of character. This example uses Sass, with variables controlling the animation. Try changing some to see what happens!

7. Black Bear

Smooth animations can be achieved with HTML and CSS, especially when we follow some basic principles. This animation keeps the number of elements to a minimum and makes great use of transforms.

8. CSS Sponge

Quick animations can add a lot of character when combined. In this demo, see how bubbles and splashes are choreographed together to create a cute animation of a happy sponge, all without any images.

9. Pure CSS Checkbox Mail

A series of keyframe animations can tell a story or help people understand what they’re looking at. Here we see an envelope open and a letter appear.

10. Care Preloader

A bit of subtle movement can create a great sense of speed! This loader features a car that looks like it’s racing along, all created with a few elements and CSS animations. With no images, it loads fast, too.

Get Inspired!

Thanks as always to CodePen and the creative minds behind these demos; they’ve certainly provided us with plenty of inspiration in these animation examples. Check out the following posts for more of the same, and to learn how to create your own CSS animations!


Source: Webdesign Tuts+

Will SiriKit’s Intents Fit Your App? If So, Here’s How To Use Them

Lou Franco

2018-04-11T17:00:44+02:00
2018-04-13T12:49:49+00:00

Since iOS 5, Siri has helped iPhone users send messages, set reminders and look up restaurants with Apple’s apps. Starting in iOS 10, we have been able to use Siri in some of our own apps as well.

In order to use this functionality, your app must fit within Apple’s predefined Siri “domains and intents.” In this article, we’ll learn about what those are and see whether our apps can use them. We’ll take a simple app that is a to-do list manager and learn how to add Siri support. We’ll also go through the configuration required on the Apple developer website and the Swift code for a new type of extension that was introduced with SiriKit: the Intents extension.

When you get to the coding part of this article, you will need Xcode (at least version 9.x), and it would be good if you are familiar with iOS development in Swift because we’re going to add Siri to a small working app. We’ll go through the steps of setting up an extension on Apple’s developer website and of adding the Siri extension code to the app.

“Hey Siri, Why Do I Need You?”

Sometimes I use my phone while on my couch, with both hands free, and I can give the screen my full attention. Maybe I’ll text my sister to plan our mom’s birthday or reply to a question in Trello. I can see the app. I can tap the screen. I can type.

But I might be walking around my town, listening to a podcast, when a text comes in on my watch. My phone is in my pocket, and I can’t easily answer while walking.

With Siri, I can hold down my headphones’ control button and say, “Text my sister that I’ll be there by two o’clock.” Siri is great when you are on the go and can’t give full attention to your phone, or when the interaction is minor but would otherwise require several taps and a bunch of typing.

This is fine if I want to use Apple apps for these interactions. But some categories of apps, like messaging, have very popular alternatives. Other activities, such as booking a ride or reserving a table in a restaurant, are not even possible with Apple’s built-in apps but are perfect for Siri.

Apple’s Approach To Voice Assistants

To enable Siri in third-party apps, Apple had to decide on a mechanism to take the sound of the user’s voice and get it to the app in a way that the app could fulfill the request. To make this possible, Apple requires the user to mention the app’s name in the request, but Apple had several options for what to do with the rest of the request.

  • It could have sent a sound file to the app.
    The benefit of this approach is that the app could try to handle literally any request the user might have for it. Amazon or Google might have liked this approach because they already have sophisticated voice-recognition services. But most apps would not be able to handle this very easily.
  • It could have turned the speech into text and sent that.
    Because many apps don’t have sophisticated natural-language implementations, the user would usually have to stick to very particular phrases, and non-English support would be up to the app developer to implement.
  • It could have asked you to provide a list of phrases that you understand.
    This mechanism is closer to what Amazon does with Alexa (in its “skills” framework), and it enables far more uses of Alexa than SiriKit can currently handle. In an Alexa skill, you provide phrases with placeholder variables that Alexa will fill in for you. For example, “Alexa, remind me at $TIME$ to $REMINDER$” — Alexa will run this phrase against what the user has said and tell you the values for TIME and REMINDER. As with the previous mechanism, the developer needs to do all of the translation, and there isn’t a lot of flexibility if the user says something slightly different.
  • It could define a list of requests with parameters and send the app a structured request.
    This is actually what Apple does, and the benefit is that it can support a variety of languages, and it does all of the work to try to understand all of the ways a user might phrase a request. The big downside is that you can only implement handlers for requests that Apple defines. This is great if you have, for example, a messaging app, but if you have a music-streaming service or a podcast player, you have no way to use SiriKit right now.

Similarly, there are three ways for apps to talk back to the user: with sound, with text that gets converted to speech, or by expressing the kind of thing you want to say and letting the system figure out the exact way to express it. The last solution (which is what Apple does) puts the burden of translation on Apple, but it gives you limited ways to use your own words to describe things.

The kinds of requests you can handle are defined in SiriKit’s domains and intents. An intent is a type of request that a user might make, like texting a contact or finding a photo. Each intent has a list of parameters — for example, texting requires a contact and a message.

A domain is just a group of related intents. Reading a text and sending a text are both in the messaging domain. Booking a ride and getting a location are in the ride-booking domain. There are domains for making VoIP calls, starting workouts, searching for photos and a few more things. SiriKit’s documentation contains a full list of domains and their intents.

A common criticism of Siri is that it seems unable to handle requests as well as Google and Alexa, and that the third-party voice ecosystem enabled by Apple’s competitors is richer.

I agree with those criticisms. If your app doesn’t fit within the current intents, then you can’t use SiriKit, and there’s nothing you can do. Even if your app does fit, you can’t control all of the words Siri says or understands; so, if you have a particular way of talking about things in your app, you can’t always teach that to Siri.

The hope of iOS developers is both that Apple will greatly expand its list of intents and that its natural language processing becomes much better. If it does that, then we will have a voice assistant that works without developers having to do translation or understand all of the ways of saying the same thing. And implementing support for structured requests is actually fairly simple to do — a lot easier than building a natural language parser.

Another big benefit of the intents framework is that it is not limited to Siri and voice requests. Even now, the Maps app can generate an intents-based request of your app (for example, a restaurant reservation). It does this programmatically (not from voice or natural language). If Apple allowed apps to discover each other’s exposed intents, we’d have a much better way for apps to work together (as opposed to x-callback-style URLs).

Finally, because an intent is a structured request with parameters, there is a simple way for an app to express that parameters are missing or that it needs help distinguishing between some options. Siri can then ask follow-up questions to resolve the parameters without the app needing to conduct the conversation.

The Ride-Booking Domain

To understand domains and intents, let’s look at the ride-booking domain. This is the domain that you would use to ask Siri to get you a Lyft car.

Apple defines how to ask for a ride and how to get information about it, but there is no built-in Apple app that can actually handle the request. This is one of the few domains where a SiriKit-enabled app is required.

You can invoke one of the intents via voice or directly from Maps. Some of the intents for this domain are:

  • Request a ride
    Use this one to book a ride. You’ll need to provide a pick-up and drop-off location, and the app might also need to know your party’s size and what kind of ride you want. A sample phrase might be, “Book me a ride with <appname>.”
  • Get the ride’s status
    Use this intent to find out whether your request was received and to get information about the vehicle and driver, including their location. The Maps app uses this intent to show an updated image of the car as it is approaching you.
  • Cancel a ride
    Use this to cancel a ride that you have booked.

For any of these intents, Siri might need to know more information. As you’ll see when we implement an intent handler, your Intents extension can tell Siri that a required parameter is missing, and Siri will prompt the user for it.

The fact that intents can be invoked programmatically by Maps shows how intents might enable inter-app communication in the future.

Note: You can get a full list of domains and their intents on Apple’s developer website. There is also a sample Apple app with many domains and intents implemented, including ride-booking.

Adding Lists And Notes Domain Support To Your App

OK, now that we understand the basics of SiriKit, let’s look at how you would go about adding Siri support to an app. It involves a lot of configuration and a class for each intent you want to handle.

The rest of this article consists of the detailed steps to add Siri support to an app. There are five high-level things you need to do:

  1. Prepare to add a new extension to the app by creating provisioning profiles with new entitlements for it on Apple’s developer website.
  2. Configure your app (via its plist) to use the entitlements.
  3. Use Xcode’s template to get started with some sample code.
  4. Add the code to support your Siri intent.
  5. Configure Siri’s vocabulary via plists.

Don’t worry: We’ll go through each of these, explaining extensions and entitlements along the way.

To focus on just the Siri parts, I’ve prepared a simple to-do list manager, List-o-Mat.

An animated GIF showing a demo of List-o-Mat
Making lists in List-o-Mat (Large preview)

You can find the full source of the sample, List-o-Mat, on GitHub.

To create it, all I did was start with the Xcode Master-Detail app template and make both screens into a UITableView. I added a way to add and delete lists and items, and a way to check off items as done. All of the navigation is generated by the template.

To store the data, I used the Codable protocol (introduced at WWDC 2017) to turn structs into JSON, which I save to a text file in the documents folder.
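
To make that concrete, here is a minimal, self-contained sketch of the Codable round trip. The type names are invented for this example; they are not necessarily List-o-Mat’s actual types.

```swift
import Foundation

// Illustrative types only — stand-ins for whatever the app actually stores.
struct TodoItem: Codable {
    let name: String
    var done: Bool
}

struct TodoList: Codable {
    let name: String
    var items: [TodoItem]
}

// Encode to JSON (as a save function would before writing the file),
// then decode it back to verify the round trip.
let groceries = TodoList(name: "Grocery Store",
                         items: [TodoItem(name: "Milk", done: false)])
let data = try! JSONEncoder().encode(groceries)
let restored = try! JSONDecoder().decode(TodoList.self, from: data)
print(restored.items[0].name) // Milk
```

In List-o-Mat, the resulting JSON file lives in the documents folder, which is what lets a separate process read the same data later.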

I’ve deliberately kept the code very simple. If you have any experience with Swift and making view controllers, then you should have no problem with it.

Now we can go through the steps of adding SiriKit support. The high-level steps would be the same for any app and whichever domain and intents you plan to implement. We’ll mostly be dealing with Apple’s developer website, editing plists and writing a bit of Swift.

For List-o-Mat, we’ll focus on the lists and notes domain, which is broadly applicable to things like note-taking apps and to-do lists.

In the lists and notes domain, we have the following intents that would make sense for our app.

  • Get a list of tasks.
  • Add a new task to a list.

Because the interactions with Siri actually happen outside of your app (maybe even when your app is not running), iOS uses an extension to implement this.

The Intents Extension

If you have not worked with extensions, you’ll need to know three main things:

  1. An extension is a separate process. It is delivered inside of your app’s bundle, but it runs completely on its own, with its own sandbox.
  2. Your app and extension can communicate with each other by being in the same app group. The easiest way is via the group’s shared sandbox folders (so, they can read and write to the same files if you put them there).
  3. Extensions require their own app IDs, profiles and entitlements.

To add an extension to your app, start by logging into your developer account and going to the “Certificates, Identifiers & Profiles” section.

Updating Your Apple Developer App Account Data

In our Apple developer account, the first thing we need to do is create an app group. Go to the “App Groups” section under “Identifiers” and add one.

A screenshot of the Apple developer website dialog for registering an app group
Registering an app group (Large preview)

It must start with “group.”, followed by your usual reverse-domain-based identifier. Because it has a prefix, you can use your app’s identifier for the rest.

Then, we need to update our app’s ID to use this group and to enable Siri:

  1. Go to the “App IDs” section and click on your app’s ID;
  2. Click the “Edit” button;
  3. Enable app groups (if not enabled for another extension).
    A screenshot of Apple developer website enabling app groups for an app ID
    Enable app groups (Large preview)
  4. Then configure the app group by clicking the “Edit” button. Choose the app group from before.
    A screenshot of the Apple developer website dialog to set the app group name
    Set the name of the app group (Large preview)
  5. Enable SiriKit.
    A screenshot of SiriKit being enabled
    Enable SiriKit (Large preview)
  6. Click “Done” to save it.

Now, we need to create a new app ID for our extension:

  1. In the same “App IDs” section, add a new app ID. This will be your app’s identifier, with a suffix. Do not use just Intents as a suffix because this name will become your module’s name in Swift and would then conflict with the real Intents.
    A screenshot of the Apple developer screen to create an app ID
    Create an app ID for the Intents extension (Large preview)
  2. Enable this app ID for app groups as well (and set up the group as we did before).

Now, create a development provisioning profile for the Intents extension, and regenerate your app’s provisioning profile. Download and install them as you would normally do.

Now that our profiles are installed, we need to go to Xcode and update the app’s entitlements.

Updating Your App’s Entitlements In Xcode

Back in Xcode, choose your project’s name in the project navigator. Then, choose your app’s main target, and go to the “Capabilities” tab. In there, you will see a switch to turn on Siri support.

A screenshot of Xcode’s entitlements screen showing SiriKit is enabled
Enable SiriKit in your app’s entitlements. (Large preview)

Further down the list, you can turn on app groups and configure it.

A screenshot of Xcode's entitlements screen showing the app group is enabled and configured
Configure the app’s app group (Large preview)

If you have set it up correctly, you’ll see this in your app’s .entitlements file:

A screenshot of the App's plist showing that the entitlements are set
The plist shows the entitlements that you set (Large preview)

Now, we are finally ready to add the Intents extension target to our project.

Adding The Intents Extension

We’re finally ready to add the extension. In Xcode, choose “File” → “New Target.” This sheet will pop up:

A screenshot showing the Intents extension in the New Target dialog in Xcode
Add the Intents extension to your project (Large preview)

Choose “Intents Extension” and click the “Next” button. Fill out the following screen:

A screeenshot from Xcode showing how you configure the Intents extension
Configure the Intents extension (Large preview)

The product name needs to match whatever you made the suffix in the intents app ID on the Apple developer website.

We are choosing not to add an intents UI extension. This isn’t covered in this article, but you could add it later if you need one. Basically, it’s a way to put your own branding and display style into Siri’s visual results.

When you are done, Xcode will create an intents handler class that we can use as a starting point for our Siri implementation.

The Intents Handler: Resolve, Confirm And Handle

Xcode generated a new target that has a starting point for us.

The first thing you have to do is set up this new target to be in the same app group as the app. As before, go to the target’s “Capabilities” tab, turn on app groups, and configure it with your group name. Remember, apps in the same group share a sandbox that they can use to exchange files with each other. We need this in order for Siri requests to get to our app.

List-o-Mat has a function that returns the group document folder. We should use it whenever we want to read or write to a shared file.

func documentsFolder() -> URL? {
    return FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: "group.com.app-o-mat.ListOMat")
}

For example, when we save the lists, we use this:

func save(lists: Lists) {
    guard let docsDir = documentsFolder() else {
        fatalError("no docs dir")
    }

    let url = docsDir.appendingPathComponent(fileName, isDirectory: false)

    // Encode lists as JSON and save to url
    // (assumes Lists conforms to Codable, as described above)
    if let data = try? JSONEncoder().encode(lists) {
        try? data.write(to: url)
    }
}

The Intents extension template created a file named IntentHandler.swift, with a class named IntentHandler. It also configured it to be the intents’ entry point in the extension’s plist.

A screenshot from Xcode showing how the IntentHandler is configured as an entry point
The intent extension plist configures IntentHandler as the entry point

In this same plist, you will see a section to declare the intents we support. We’re going to start with the one that allows searching for lists, which is named INSearchForNotebookItemsIntent. Add it to the array under IntentsSupported.

A screenshot in Xcode showing that the extension plist should list the intents it handles
Add the intent’s name to the intents plist (Large preview)

Now, go to IntentHandler.swift and replace its contents with this code:

import Intents

class IntentHandler: INExtension {
    override func handler(for intent: INIntent) -> Any? {
        switch intent {
        case is INSearchForNotebookItemsIntent:
            return SearchItemsIntentHandler()
        default:
            return nil
        }
    }
}

The handler function is called to get an object to handle a specific intent. You can just implement all of the protocols in this class and return self, but we’ll put each intent in its own class to keep it better organized.

Because we intend to have a few different classes, let’s give them a common base class for code that we need to share between them:

class ListOMatIntentsHandler: NSObject {
}

The intents framework requires us to inherit from NSObject. We’ll fill in some methods later.

We start our search implementation with this:

class SearchItemsIntentHandler: ListOMatIntentsHandler,
                                                       INSearchForNotebookItemsIntentHandling {
}

To implement an intent handler, we need to follow three basic steps:

  1. Resolve the parameters.
    Make sure required parameters are given, and disambiguate any you don’t fully understand.
  2. Confirm that the request is doable.
    This is often optional, but even if you know that each parameter is good, you might still need access to an outside resource or have other requirements.
  3. Handle the request.
    Do the thing that is being requested.

INSearchForNotebookItemsIntent, the first intent we’ll implement, can be used as a task search. The kinds of requests we can handle with this are, “In List-o-Mat, show the grocery store list” or “In List-o-Mat, show the store list.”

Aside: “List-o-Mat” is actually a bad name for a SiriKit app because Siri has a hard time with hyphens in app names. Luckily, SiriKit allows us to provide alternate names and pronunciation guides. In the app’s Info.plist, add this section:

A screenshot from Xcode showing that the app plist can add alternate app names and pronunciations
Add alternate app names and pronunciation guides to the app plist

This allows the user to say “list oh mat” and for that to be understood as a single word (without hyphens). It doesn’t look ideal on the screen, but without it, Siri sometimes thinks “List” and “Mat” are separate words and gets very confused.
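
For reference, that plist section in source form looks something like this. The key names are Apple’s documented `INAlternativeAppNames` keys; the values shown are illustrative:

```xml
<key>INAlternativeAppNames</key>
<array>
    <dict>
        <key>INAlternativeAppName</key>
        <string>ListOMat</string>
        <key>INAlternativeAppNamePronunciationHint</key>
        <string>list oh mat</string>
    </dict>
</array>
```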

Resolve: Figuring Out The Parameters

For a search for notebook items, there are several parameters:

  1. the item type (a task, a task list, or a note),
  2. the title of the item,
  3. the content of the item,
  4. the completion status (whether the task is marked done or not),
  5. the location it is associated with,
  6. the date it is associated with.

We require only the first two, so we’ll need to write resolve functions for them. INSearchForNotebookItemsIntent has methods for us to implement.

Because we only care about showing task lists, we’ll hardcode that into the resolve for item type. In SearchItemsIntentHandler, add this:

func resolveItemType(for intent: INSearchForNotebookItemsIntent,
                         with completion: @escaping (INNotebookItemTypeResolutionResult) -> Void) {

    completion(.success(with: .taskList))
}

So, no matter what the user says, we’ll be searching for task lists. If we wanted to expand our search support, we’d let Siri try to figure this out from the original phrase and then just use completion(.needsValue()) if the item type was missing. Alternatively, we could try to guess from the title by seeing what matches it. In this case, we would complete with success when Siri knows what it is, and we would use completion(.notRequired()) when we are going to try multiple possibilities.

Title resolution is a little trickier. What we want is for Siri to use a list if it finds one with an exact match for what you said. If it’s unsure or if there is more than one possibility, then we want Siri to ask us for help in figuring it out. To do this, SiriKit provides a set of resolution enums that let us express what we want to happen next.

So, if you say “Grocery Store,” then Siri would have an exact match. But if you say “Store,” then Siri would present a menu of matching lists.

We’ll start with this function to give the basic structure:

func resolveTitle(for intent: INSearchForNotebookItemsIntent, with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    guard let title = intent.title else {
        completion(.needsValue())
        return
    }

    let possibleLists = getPossibleLists(for: title)
    completeResolveListName(with: possibleLists, for: title, with: completion)
}

We’ll implement getPossibleLists(for:) and completeResolveListName(with:for:with:) in the ListOMatIntentsHandler base class.

getPossibleLists(for:) needs to try to fuzzy match the title that Siri passes us with the actual list names.

public func getPossibleLists(for listName: INSpeakableString) -> [INSpeakableString] {
    var possibleLists = [INSpeakableString]()
    for l in loadLists() {
        if l.name.lowercased() == listName.spokenPhrase.lowercased() {
            return [INSpeakableString(spokenPhrase: l.name)]
        }
        if l.name.lowercased().contains(listName.spokenPhrase.lowercased()) || listName.spokenPhrase.lowercased() == "all" {
            possibleLists.append(INSpeakableString(spokenPhrase: l.name))
        }
    }
    return possibleLists
}

We loop through all of our lists. If we get an exact match, we’ll return it, and if not, we’ll return an array of possibilities. In this function, we’re simply checking to see whether the word the user said is contained in a list name (so, a pretty simple match). This lets “Grocery” match “Grocery Store.” A more advanced algorithm might try to match based on words that sound the same (for example, with the Soundex algorithm).
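
For illustration only, here is a simplified Soundex sketch in plain Swift, showing what such a sound-alike comparison could look like. It is not part of List-o-Mat, and a production matcher would want the full rules.

```swift
import Foundation

// Simplified Soundex: first letter plus up to three digit codes.
// Vowels (and y) separate codes; h and w are skipped without
// resetting the repeat check; repeated codes collapse.
func soundex(_ word: String) -> String {
    let codes: [Character: Character] = [
        "b": "1", "f": "1", "p": "1", "v": "1",
        "c": "2", "g": "2", "j": "2", "k": "2", "q": "2",
        "s": "2", "x": "2", "z": "2",
        "d": "3", "t": "3",
        "l": "4",
        "m": "5", "n": "5",
        "r": "6"
    ]
    let lower = Array(word.lowercased())
    guard let first = lower.first else { return "" }
    var result = String(first).uppercased()
    var lastCode = codes[first]
    for ch in lower.dropFirst() {
        let code = codes[ch] // nil for vowels, h, w, y
        if let c = code, c != lastCode {
            result.append(c)
        }
        if ch != "h" && ch != "w" { // h and w don't reset the repeat check
            lastCode = code
        }
    }
    return String(result.prefix(4)).padding(toLength: 4, withPad: "0", startingAt: 0)
}

print(soundex("Grocery"))   // G626
print(soundex("Groceries")) // G626
```

Two names would then count as a match when their codes are equal, so “Groceries” could still find the “Grocery Store” list.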

completeResolveListName(with:for:with:) is responsible for deciding what to do with this list of possibilities.

public func completeResolveListName(with possibleLists: [INSpeakableString], for listName: INSpeakableString, with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    switch possibleLists.count {
    case 0:
        completion(.unsupported())
    case 1:
        if possibleLists[0].spokenPhrase.lowercased() == listName.spokenPhrase.lowercased() {
            completion(.success(with: possibleLists[0]))
        } else {
            completion(.confirmationRequired(with: possibleLists[0]))
        }
    default:
        completion(.disambiguation(with: possibleLists))
    }
}

If we got an exact match, we tell Siri that we succeeded. If we got one inexact match, we tell Siri to ask the user if we guessed it right.

If we got multiple matches, then we use completion(.disambiguation(with: possibleLists)) to tell Siri to show a list and let the user pick one.

Now that we know what the request is, we need to look at the whole thing and make sure we can handle it.

Confirm: Check All Of Your Dependencies

In this case, if we have resolved all of the parameters, we can always handle the request. Typical confirm() implementations might check the availability of external services or check authorization levels.

Because confirm() is optional, we could just do nothing, and Siri would assume we could handle any request with resolved parameters. To be explicit, we could use this:

func confirm(intent: INSearchForNotebookItemsIntent, completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) {
    completion(INSearchForNotebookItemsIntentResponse(code: .success, userActivity: nil))
}

This means we can handle anything.

Handle: Do It

The final step is to handle the request.

func handle(intent: INSearchForNotebookItemsIntent, completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) {
    guard
        let title = intent.title,
        let list = loadLists().filter({ $0.name.lowercased() == title.spokenPhrase.lowercased()}).first
    else {
        completion(INSearchForNotebookItemsIntentResponse(code: .failure, userActivity: nil))
        return
    }

    let response = INSearchForNotebookItemsIntentResponse(code: .success, userActivity: nil)
    response.tasks = list.items.map {
        return INTask(title: INSpeakableString(spokenPhrase: $0.name),
                      status: $0.done ? INTaskStatus.completed : INTaskStatus.notCompleted,
                      taskType: INTaskType.notCompletable,
                      spatialEventTrigger: nil,
                      temporalEventTrigger: nil,
                      createdDateComponents: nil,
                      modifiedDateComponents: nil,
                      identifier: "\(list.name)\t\($0.name)")
    }
    completion(response)
}

First, we find the list based on the title. At this point, resolveTitle has already made sure that we’ll get an exact match. But if there’s an issue, we can still return a failure.

When we have a failure, we have the option of passing a user activity. If your app uses Handoff and has a way to handle this exact type of request, then Siri might try deferring to your app to try the request there. It will not do this when we are in a voice-only context (for example, you started with “Hey Siri”), and it doesn’t guarantee that it will do it in other cases, so don’t count on it.

This is now ready to test. Choose the intent extension in the target list in Xcode. But before you run it, edit the scheme.

A screenshot from Xcode showing how to edit a scheme
Edit the scheme of the intent extension to add a sample phrase for debugging.

That brings up a way to provide a query directly:

A screenshot from Xcode showing the edit scheme dialog
Add the sample phrase to the Run section of the scheme. (Large preview)

Notice that I am using “ListOMat” because of the hyphen issue mentioned above. Luckily, it’s pronounced the same as my app’s name, so it should not be much of an issue.

Back in the app, I made a “Grocery Store” list and a “Hardware Store” list. If I ask Siri for the “store” list, it will go through the disambiguation path, which looks like this:

An animated GIF showing Siri handling a request to show the Store list
Siri handles the request by asking for clarification. (Large preview)

If you say “Grocery Store,” then you’ll get an exact match, which goes right to the results.

Adding Items Via Siri

Now that we know the basic concepts of resolve, confirm and handle, we can quickly add an intent to add an item to a list.

First, add INAddTasksIntent to the extension’s plist:

A screenshot in Xcode showing the new intent being added to the plist
Add the INAddTasksIntent to the extension plist (Large preview)

Then, update our IntentHandler’s handler function.

override func handler(for intent: INIntent) -> Any? {
    switch intent {
    case is INSearchForNotebookItemsIntent:
        return SearchItemsIntentHandler()
    case is INAddTasksIntent:
        return AddItemsIntentHandler()
    default:
        return nil
    }
}

Add a stub for the new class:

class AddItemsIntentHandler: ListOMatIntentsHandler, INAddTasksIntentHandling {
}

Adding an item needs a resolve step similar to the one for searching, except that it resolves a target task list instead of a title.

func resolveTargetTaskList(for intent: INAddTasksIntent, with completion: @escaping (INTaskListResolutionResult) -> Void) {

    guard let title = intent.targetTaskList?.title else {
        completion(.needsValue())
        return
    }

    let possibleLists = getPossibleLists(for: title)
    completeResolveTaskList(with: possibleLists, for: title, with: completion)
}

completeResolveTaskList is just like completeResolveListName, but with slightly different types (a task list instead of the title of a task list).

public func completeResolveTaskList(with possibleLists: [INSpeakableString], for listName: INSpeakableString, with completion: @escaping (INTaskListResolutionResult) -> Void) {

    let taskLists = possibleLists.map {
        return INTaskList(title: $0, tasks: [], groupName: nil, createdDateComponents: nil, modifiedDateComponents: nil, identifier: nil)
    }

    switch possibleLists.count {
    case 0:
        completion(.unsupported())
    case 1:
        if possibleLists[0].spokenPhrase.lowercased() == listName.spokenPhrase.lowercased() {
            completion(.success(with: taskLists[0]))
        } else {
            completion(.confirmationRequired(with: taskLists[0]))
        }
    default:
        completion(.disambiguation(with: taskLists))
    }
}

It has the same disambiguation logic and behaves in exactly the same way. Saying “Store” needs to be disambiguated, and saying “Grocery Store” would be an exact match.

We’ll leave confirm unimplemented and accept the default. For handle, we need to add an item to the list and save it.

func handle(intent: INAddTasksIntent, completion: @escaping (INAddTasksIntentResponse) -> Void) {
    var lists = loadLists()
    guard
        let taskList = intent.targetTaskList,
        let listIndex = lists.index(where: { $0.name.lowercased() == taskList.title.spokenPhrase.lowercased() }),
        let itemNames = intent.taskTitles, itemNames.count > 0
    else {
            completion(INAddTasksIntentResponse(code: .failure, userActivity: nil))
            return
    }

    // Get the list
    var list = lists[listIndex]

    // Add the items
    var addedTasks = [INTask]()
    for item in itemNames {
        list.addItem(name: item.spokenPhrase, at: list.items.count)
        addedTasks.append(INTask(title: item, status: .notCompleted, taskType: .notCompletable, spatialEventTrigger: nil, temporalEventTrigger: nil, createdDateComponents: nil, modifiedDateComponents: nil, identifier: nil))
    }

    // Save the new list
    lists[listIndex] = list
    save(lists: lists)

    // Respond with the added items
    let response = INAddTasksIntentResponse(code: .success, userActivity: nil)
    response.addedTasks = addedTasks
    completion(response)
}

We get a list of items and a target list. We look up the list and add the items. We also need to prepare a response for Siri to show with the added items and send it to the completion function.

This function can handle a phrase like, “In ListOMat, add apples to the grocery list.” It can also handle a list of items like, “rice, onions and olives.”

A screenshot of the simulator showing Siri adding items to the grocery store list
Siri adds a few items to the grocery store list

Almost Done, Just A Few More Settings

All of this will work on your simulator or a local device, but if you want to submit the app, you’ll need to add an NSSiriUsageDescription key to your app’s plist, with a string that describes what you are using Siri for. Something like “Your requests about lists will be sent to Siri” is fine.
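In the Info.plist source, that key is a plain string entry; the wording below is just the example from above:

```xml
<key>NSSiriUsageDescription</key>
<string>Your requests about lists will be sent to Siri</string>
```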

You should also add a call to:

INPreferences.requestSiriAuthorization { (status) in }

Put this in your main view controller’s viewDidLoad to ask the user for Siri access. This will show the message you configured above and also let the user know that they could be using Siri for this app.
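Putting that together, a sketch of the view controller code might look like this (the handling of each status is illustrative; react to it however suits your app):

```swift
import UIKit
import Intents

class MainViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // Shows the NSSiriUsageDescription message the first time,
        // then reports the current authorization status.
        INPreferences.requestSiriAuthorization { status in
            switch status {
            case .authorized:
                break // Siri can route requests to the extension
            case .denied, .restricted, .notDetermined:
                break // Consider hiding any Siri-related UI hints
            }
        }
    }
}
```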

A screenshot of the dialog that a device pops up when you ask for Siri permission
The device will ask for permission if you try to use Siri in the app.

Finally, you’ll need to tell Siri what to tell the user if the user asks what your app can do, by providing some sample phrases:

  1. Create a plist file in your app (not the extension), named AppIntentVocabulary.plist.
  2. Fill out the intents and phrases that you support.
A screenshot of the AppIntentVocabulary.plist showing sample phrases
Add an AppIntentVocabulary.plist to list the sample phrases that will invoke the intent you handle. (Large preview)
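As a sketch, an AppIntentVocabulary.plist covering the two intents in this article could look like the following (the example phrases are my own, not Apple’s):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>IntentPhrases</key>
    <array>
        <dict>
            <key>IntentName</key>
            <string>INSearchForNotebookItemsIntent</string>
            <key>IntentExamples</key>
            <array>
                <string>Show the grocery store list in ListOMat</string>
            </array>
        </dict>
        <dict>
            <key>IntentName</key>
            <string>INAddTasksIntent</string>
            <key>IntentExamples</key>
            <array>
                <string>Add apples to the grocery store list in ListOMat</string>
            </array>
        </dict>
    </array>
</dict>
</plist>
```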

There is no way to really know all of the phrases that Siri will use for an intent, but Apple does provide a few samples for each intent in its documentation. The sample phrases for task-list searching show us that Siri can understand “Show me all my notes on <appName>,” but I found other phrases by trial and error (for example, Siri understands what “lists” are too, not just notes).

Summary

As you can see, adding Siri support to an app has a lot of steps, with a lot of configuration. But the code needed to handle the requests was fairly simple.

There are a lot of steps, but each one is small, and you might be familiar with a few of them if you have used extensions before.

Here is what you’ll need to prepare for a new extension on Apple’s developer website:

  1. Make an app ID for an Intents extension.
  2. Make an app group if you don’t already have one.
  3. Use the app group in the app ID for the app and extension.
  4. Add Siri support to the app’s ID.
  5. Regenerate the profiles and download them.

And here are the steps in Xcode for creating Siri’s Intents extension:

  1. Add an Intents extension using the Xcode template.
  2. Update the entitlements of the app and extension to match the profiles (groups and Siri support).
  3. Add your intents to the extension’s plist.

And you’ll need to add code to do the following things:

  1. Use the app group sandbox to communicate between the app and extension.
  2. Add classes to support each intent with resolve, confirm and handle functions.
  3. Update the generated IntentHandler to use those classes.
  4. Ask for Siri access somewhere in your app.

Finally, there are some Siri-specific configuration settings:

  1. Add the Siri support security string to your app’s plist.
  2. Add sample phrases to an AppIntentVocabulary.plist file in your app.
  3. Run the intent target to test; edit the scheme to provide the phrase.

OK, that is a lot, but if your app fits one of Siri’s domains, then users will expect that they can interact with it via voice. And because the competition among voice assistants is so strong, we can only expect that WWDC 2018 will bring a bunch more domains and, hopefully, a much better Siri.


Smashing Editorial (da, ra, al, il)


Source: Smashing Magazine

Becoming A UX Leader

Becoming A UX Leader

Becoming A UX Leader

Christopher Murphy

2018-04-11T12:50:15+02:00
2018-04-13T12:49:49+00:00

(This is a sponsored article.) In my previous article on Building UX Teams, I explored the rapidly growing need for UX teams as a result of the emergence of design as a wider business driver. As teams grow, so too does a need for leaders to nurture and guide them.

In my final article in this series on user experience design, I’ll explore the different characteristics of an effective UX leader, and provide some practical advice about growing into a leadership role.

I’ve worked in many organizations — both large and small, and in both the private and public sectors — and, from experience, leadership is a rare quality that is far from commonplace. Truly inspiring leaders are few and far between; if you’re fortunate to work with one, make every effort to learn from them.

Managers who have risen up the ranks don’t automatically become great leaders, and perhaps one of the biggest lessons I’ve learned is that truly inspirational leaders — those who inspire passion and commitment — aren’t as commonplace as you’d think.

A UX leader is truly a hybrid, perhaps more so than in many other — more traditional — businesses. A UX leader needs to encompass a wide range of skills:

  • Establishing, driving and articulating a vision;
  • Communicating across different teams, including design, research, writing, engineering, and business (no small undertaking!);
  • Acting as a champion for user-focused design;
  • Mapping design decisions to key performance indicators (KPIs), and vice-versa, so that success can be measured; and
  • Managing a team, ensuring all the team’s members are challenged and motivated.

UX leadership is not unlike being bilingual — or, more accurately, multilingual — and it’s a skill that requires dexterity so that nothing gets lost in translation.

This hybrid skill set can seem daunting, but — like anything — the attributes of leadership can be learned and developed. In my final article in this series of ten, I’ll explore what defines a leader and focus on the qualities and attributes needed to step up to this important role.

Undertaking A Skills Audit

Every leader is different, and every leader will be informed by the different experiences they have accumulated to date. There are, however, certain qualities and attributes that leaders tend to share in common.

Great leaders demonstrate self-awareness. They tend to have the maturity to have looked themselves in the mirror and identified aspects of their character that they may need to develop if they are to grow as leaders.

Having identified their strengths and weaknesses and pinpointed areas for improvement, they will have an idea of what they know and — equally important — what they don’t know. As Donald Rumsfeld famously put it:

“There are known knowns: there are things we know we know. We also know there are known unknowns: That is to say, we know there are some things we do not know. But there are also unknown unknowns: the things we don’t know we don’t know.”

Rumsfeld might have been talking about unknown unknowns in a conflict scenario, but his insight applies equally to the world of leadership. To grow as a leader, it’s important to widen your knowledge so that it addresses both:

  • The Known Unknowns
    Skills you know that you don’t know, which you can identify through a self-critical skills audit; and
  • The Unknown Unknowns
    Skills you don’t know you don’t know, which you can identify through inviting your peers to review your strengths and weaknesses.

In short, a skills audit will equip you with a roadmap that you can use to plot a path from where you are now to where you want to be.



Undertaking a skills audit will enable you to develop a map that you can use to plot a path from where you are now to where you want to be. (Large preview)

To become an effective leader is to embark upon a journey, identifying the gaps in your knowledge and — step by step — addressing these gaps so that you’re prepared for the leadership role ahead.

Identifying The Gaps In Your Knowledge

One way to identify the gaps in your knowledge is to undertake an honest and self-reflective ‘skills audit’ while making an effort to both learn about yourself and learn about the environment you are working within.

To become a UX leader, it’s critical to develop this self-awareness, identifying the knowledge you need to acquire by undertaking both self-assessments and peer assessments. With your gaps in knowledge identified, it’s possible to build a learning pathway to address these gaps.

In the introduction, I touched on a brief list of skills that an effective and well-equipped leader needs to develop. That list is just the tip of a very large iceberg. At the very least, a hybrid UX leader needs to equip themselves by:

  • Developing an awareness of context, expanding beyond the realms of design to encompass a broader business context;
  • Understanding and building relationships with a cross-section of team members;
  • Identifying outcomes and goals, establishing KPIs that will help to deliver these successfully;
  • Managing budgets, both soft and hard; and
  • Planning and mapping time, often across a diversified team.

These are just a handful of the skills that an effective UX leader needs to develop. If you’re anything like me, hardly any of this was taught at art school, so you’ll need to learn these skills yourself. This article will help to point you in the right direction. I’ve also provided a list of suggested reading at the end to ensure you’re well covered.

A 360º Assessment

A 360º leadership assessment is a form of feedback for leaders. Drawn from the world of business, but equally applicable to the world of user experience, it is an excellent way to measure your effectiveness and influence as a leader.

Unlike a top-down appraisal, where a leader or manager appraises an employee underneath them in the hierarchy, a 360º assessment involves inviting your colleagues — at your peer level — to appraise you, highlighting your strengths and weaknesses.

This isn’t easy — and can lead to some uncomfortable home truths — but it can prove a critical tool in helping you to identify the qualities you need to work on. You might, for example, consider yourself an excellent listener only to discover that your colleagues feel like this is something you need to work on.

This willingness to put yourself under the spotlight, identify your weaknesses, address these, and develop yourself is one of the defining characteristics of leaders. Great leaders are always learning and they aren’t afraid to admit that fact.

A 360º assessment is a great way to uncover your ‘unknown unknowns’, i.e. the gaps in your knowledge that you aren’t aware of. With these discoveries in hand, it’s possible to build ‘a learning road-map’ that will allow you to develop the skills you need.

Build A Roadmap

With the gaps in your knowledge identified, it’s important to adopt some strategies to address these gaps. Great leaders understand that learning is a lifelong process and to transition into a leadership role will require — inevitably — the acquisition of new skills.

To develop as a leader, it’s important to address your knowledge gaps in a considered and systematic manner. By working back from your skills audit, identify what you need to work on and build a learning programme accordingly.

This will inevitably involve developing an understanding of different domains of knowledge, but that’s the leader’s path. The important thing is to take it step by step and, of course, to take that first step.

We are fortunate now to be working in an age in which we have an abundance of learning materials at our fingertips. We no longer need to enroll in a course at a university to learn; we can put together our own bespoke learning programmes.

We now have so many tools we can use, from paid resources like Skillshare which offers “access to a learning platform for personalized, on-demand learning,” to free resources like FutureLearn which offers the ability to “learn new skills, pursue your interests and advance your career.”

In short, you have everything you need to enhance your career just a click away.

It’s Not Just You

Great leaders understand that it’s not about the effort of individuals, working alone. It’s about the effort of individuals — working collectively. Looking back through the history of innovation, we can see that most (if not all) of the greatest breakthroughs were driven by teams that were motivated by inspirational leaders.

Thomas Edison didn’t invent the lightbulb alone; he had an ‘invention factory’ housed in a series of research laboratories. Similarly, when we consider the development of contemporary graphical user interfaces (GUIs), these emerged from the teamwork of Xerox PARC. The iPod was similarly conceived.

Great leaders understand that it’s not about them as individuals, but it’s about the teams they put together, which they motivate and orchestrate. They have the humility to build and lead teams that deliver towards the greater good.

This — above all — is one of the defining characteristics of a great leader: they prioritize and celebrate the team’s success over and above their own success.

It’s All About Teamwork

Truly great leaders understand the importance that teams play in delivering outcomes and goals. One of the most important roles a leader needs to undertake is to act as a lynchpin that sits at the heart of a team, identifying new and existing team members, nurturing them, and building them into a team that works effectively together.

A forward-thinking leader won’t just focus on the present, but will proactively develop a vision and long-term goals for future success. To deliver upon this vision of future success will involve both identifying potential new team members, but — just as importantly — developing existing team members. This involves opening your eyes to the different aspects of the business environment you occupy, getting to know your team, and identifying team members’ strengths and weaknesses.

As a UX leader, an important role you’ll play is helping others by mentoring and coaching them, ensuring they are equipped with the skills they need to grow. Again, this is where a truly selfless leader will put others first, in the knowledge that the stronger the team, the stronger the outcomes will be.

As a UX leader, you’ll also act as a champion for design within the wider business context. You’ll act as a bridge — and occasionally, a buffer — between the interface of business requirements and design requirements. Your role will be to champion the power of design and sell its benefits, always singing your team’s praises and — occasionally — fighting on their behalf (often without their awareness).

The Art Of Delegation

It’s possible to build a great UX team from the inside by developing existing team members, and an effective leader will use delegation as an effective development tool to enhance their team members’ capabilities.

Delegation isn’t just passing off the tasks you don’t want to do, it’s about empowering the different individuals in a team. A true leader understands this and spends the time required to learn how to delegate effectively.

Delegation is about education and expanding others’ skill sets, and it’s a powerful tool when used correctly. Effective delegation is a skill, one that you’ll need to acquire to step up into a UX leadership role.

When delegating a task to a team member, it’s important to explain to them why you’re delegating the task. As a leader, your role is to provide clear guidance and this involves explaining why you’ve chosen a team member for a task and how they will be supported, developed and rewarded for taking the task on.

This latter point is critical: All too often managers who lack leadership skills use delegation as a means to offload tasks and responsibility, unaware of the power of delegation. This is poor delegation and it’s ineffective leadership, though I imagine, sadly, we have all experienced it! An effective leader understands and strives to delegate effectively by:

  • defining the task, establishing the outcomes and goals;
  • identifying the appropriate individual or team to take the task on;
  • assessing the team members’ abilities and ascertaining any training needs;
  • explaining their reasoning, clearly outlining why they chose the individual or team;
  • stating the required results;
  • agreeing on realistic deadlines; and
  • providing feedback on completion of the task.

When outlined like this, it becomes clear that effective delegation is more than simply passing on a task you’re unwilling to undertake. Instead, it’s a powerful tool that an effective UX leader uses to enable their team members to take ownership of opportunities, whilst growing their skills and experience.

Give Success And Accept Failure

A great leader is selfless: they give credit for any successes to the team; and accept the responsibility for any failures alone.

A true leader gives success to the team, ensuring that — when there’s a win — the team is celebrated for its success. A true leader takes pleasure in celebrating the team’s win. When it comes to failure, however, a true leader steps up and takes responsibility. A mark of a truly great leader is this selflessness.

As a leader, you set the direction and nurture the team, so it stands to reason that, if things go wrong — which they often do — you’re willing to shoulder the responsibility. This understanding — that you need to give success and accept failure — is what separates great leaders from mediocre managers.

Poor managers will seek to ‘deflect the blame,’ looking for anyone but themselves to apportion responsibility to. Inspiring leaders are aware that, at the end of the day, they are responsible for the decisions made and outcomes reached; when things go wrong they accept responsibility.

If you’re to truly inspire others and lead them to achieve great things, it’s important to remember this distinction between managers and leaders. By giving success and accepting failure, you’ll breed intense loyalty in your team.

Lead By Example

Great leaders understand the importance of leading by example, acting as a beacon for others. To truly inspire a team, it helps to connect yourself with that team, and not isolate yourself. Rolling up your sleeves and pitching in, especially when deadlines are pressing, is a great way to demonstrate that you haven’t lost touch with the ‘front line.’

A great leader understands that success is — always — a team effort and that a motivated team will deliver far more than the sum of its parts.

As I’ve noted in my previous articles: If you’re ever the smartest person in a room, find another room. An effective leader has the confidence to surround themselves with other, smarter people.

Leadership isn’t about individual status or being seen to be the most talented. It’s about teamwork and getting the most out of a well-oiled machine of individuals working effectively together.

Get Out Of Your Silo

To lead effectively, it’s important to get out of your silo and to see the world as others do. This means getting to know all of the team, throughout the organization and at every level.

Leaders that isolate themselves — in their often luxurious corner offices — are, in my experience, poor leaders (if, indeed, they can be called leaders at all!). By distancing themselves from the individuals that make up an organization they run the very real risk of losing touch.

To lead, get out of your silo and acquaint yourself with the totality of your team and, if you’re considering a move into leadership, make it your primary task to explore all the facets of the business.

The Pieces Of The Jigsaw

To lead effectively, you need to have an understanding of others and their different skills. In my last article, Building a UX Team, I wrote about the idea of ‘T-shaped’ people — those that have a depth of skill in their field, along with the willingness and ability to collaborate across disciplines. Great leaders tend to be T-shaped, flourishing by seeing things from others’ perspectives.

Every organization — no matter how large or small — is like an elaborate jigsaw that is made up of many different interlocking parts. An effective leader is informed by an understanding of this context; they will have made the effort to see all of the pieces of the jigsaw. As a UX leader, you’ll need to familiarize yourself with a wide range of different teams, including design, research, writing, engineering, and business.

To lead effectively, it’s important to push outside of your comfort zone and learn about these different specialisms. Do so and you will ensure that you can communicate to these different stakeholders. At the risk of mixing metaphors, you will be the glue that holds the jigsaw together.

Sweat The Details

As Charles and Ray Eames put it:

“The details aren’t the details, they make the product.”

Great leaders understand this: they set the bar high and they encourage and motivate the teams they lead to deliver on the details. To lead a team, it’s important to appreciate the need to strive for excellence. Great leaders aren’t happy to accept the status quo, especially if the status quo can be improved upon.

Of course, these qualities can be learned, but many of us — thankfully — have them, innately. Few (if any) of us are happy with second best and, in a field driven by a desire to create delightful and memorable user experiences, we appreciate the importance of details and their place in the grand scheme of things. This is a great foundation on which to build leadership skills.

To thrive as a leader, it’s important to share this focus on the details with others, ensuring they understand and appreciate the role that the details play in the whole. Act as a beacon of excellence: lead by example; never settle for second best; give success and accept failure… and your team will follow you.

In Closing

As user experience design matures as a discipline, and the number of different specializations multiplies, so too does the discipline’s need for leaders, to nurture and grow teams. As a relatively new field of expertise, the opportunities to develop as a UX leader are tremendous.

Leadership is a skill and — like any skill — it can be learned. As I’ve noted throughout this series of articles, one of the best places to learn is to look to other disciplines for guidance, widening the frame of reference. When we consider leadership, this is certainly true.

There is a great deal we can learn from the world of business, and websites like Harvard Business Review (HBR), McKinsey Quarterly, and Fast Company — amongst many, many others — offer us a wealth of insight.

There’s never been a more exciting time to work in User Experience design. UX has the potential to impact on so many facets of life, and the world is crying out for leaders to step up and lead the charge. I’d encourage anyone eager to learn and to grow to undertake a skills audit, take the first step, and embark on the journey into leadership. Leadership is a privilege, rich with rewards, and is something I’d strongly encourage exploring.

Suggested Reading

There are many great publications, offline and online, that will help you on your adventure. I’ve included a few below to start you on your journey.

  • The Harvard Business Review website is an excellent starting point and its guide, HBR’s 10 Must Reads on Leadership, provides an excellent overview on developing leadership qualities.
  • Peter Drucker’s writing on leadership is also well worth reading. Drucker has written many books, one I would strongly recommend is Managing Oneself. It’s a short (but very powerful) read, and I read it at least two or three times a year.
  • If you’re serious about enhancing your leadership credentials, Michael D. Watkins’s The First 90 Days: Proven Strategies for Getting Up to Speed Faster and Smarter, provides a comprehensive insight into transitioning into leadership roles.
  • Finally, HBR’s website — mentioned above — is definitely worth bookmarking. Its business-focused flavor offers designers a different perspective on leadership, which is well worth becoming acquainted with.

This article is part of the UX design series sponsored by Adobe. Adobe XD is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype, and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.

Smashing Editorial (ra, il)


Source: Smashing Magazine

20 Top Education WordPress Themes: To Make School Sites

Should you use WordPress for your school website?

Running a school is complicated, whether it’s a kindergarten, a university, or somewhere in between (not to mention online courses, MOOCs, and speciality schools!). WordPress is ideal for school sites because it’s customizable and adaptable. If you run events, you’ll need an event calendar. Have smart students on your campus? You should include a blog for student writing and perspectives. Selling your own webinars online? You’ll need a way to process those payments.

WordPress can do all of that, and starting with a WordPress theme custom-built for education and school sites is a simple shortcut (don’t worry, it’s not cheating!).

Education Websites in the Wild

First, let’s take a look at some standout examples of education websites for inspiration:

edX

edX.org is one of the largest online course providers in the world. Their homepage is simple and actionable, and highlights the credibility of their courses.

William and Mary

A top liberal arts university in the United States (and, okay, my alma mater), William and Mary’s website is a recent redesign. It ties in their newest “For The Bold” marketing campaign, uses school colors as accents, and has consistent branding throughout the site.

Code Like a Girl

Code Like a Girl runs coding workshops for young women. Their website is stylish, fast-loading, and user-friendly–all “must-haves” for any website, but especially important for a company that works in the web development space.

20 WordPress Themes for Making a School Website

Here’s a list of twenty great WordPress themes to provide a starting point for your website. Categories include: schools for kids, university websites, speciality/niche school sites, and online learning.

For the Kiddos

1. Baby Kids

Baby Kids - Education Primary School For Children
Baby Kids – Education Primary School For Children

This theme is a top-seller with a strong rating. It comes with all the best plugins, including Visual Composer, Revolution Slider, WooCommerce, and WPML Multilingual. You can also hire the makers to install it for you through Envato Studio.

2. Preschool

Preschool - Nurseries Kindergarten and School WordPress Theme
Preschool – Nurseries Kindergarten and School WordPress Theme

Bright colors and bold sections make this website well-suited for preschool and kindergarten schools.

3. Carry Hill School

Carry Hill School - Responsive WordPress Theme
Carry Hill School – Responsive WordPress Theme

The demo site for this WordPress theme is an elementary school, but with its class timetables and eCommerce support, it could also work for more complex sites.

4. Smarty

Smarty - Kindergarten School High school College University WordPress theme
Smarty – Kindergarten, School, High school, College, University WordPress theme

Smarty has demo sites for kindergarten, schools, and universities. It comes with four predefined color skins, so you know you’ll end up with a color scheme that’s designer-approved.

5. Enfant

Enfant - School and Kindergarten WordPress Theme
Enfant – School and Kindergarten WordPress Theme

A pixel-perfect theme that comes with a simple video installation tutorial and premium plugins. Like all of the themes on this list, it’s mobile-friendly and responsive.

6. Krish

Krish  Nursery School Pre School Theme
Krish | Nursery School, Pre School Theme

Krish is made for kids. It’s bright and fun, with more than eleven header types and unlimited page layouts.

High School and University Websites

7. University of Education WordPress Theme

University of Education WordPress Theme - Courses Management WP
University of Education WordPress Theme – Courses Management WP

Built with Bootstrap, this university-level website has teacher profiles, online shops, quizzes, events, and more. The photo gallery on the home page is a great way to showcase the personality of your school.

8. School Time

School Time - Modern Education WordPress Theme
School Time – Modern Education WordPress Theme

With two demo layouts and a drag-and-drop page builder, this WordPress theme is super easy to customize. Add Google Maps, contact forms, and Font Awesome typography.

9. Ed School: Education

Ed School: Education, Elementary-High School WordPress Theme

This is one of my favorites on the list: a simple, stylish theme that rivals the design of a tech or business website. There’s a list of websites using this theme on the product page, so you can check out real-life examples for inspiration.

Specialized & Niche School Types

10. Dance Studio

Dance Studio – WordPress Theme for Dancing Schools & Clubs

Running a dance studio means your website can’t be uncool. This layout is interesting without being too confusing or complicated.

11. Swim School WordPress

Swim School WordPress

A water-themed website for soon-to-be swimmers. Grab some stock photos of pools and swimming kids from Envato Elements to fill your site with imagery, and you’re good to go.

12. Lighthouse

Lighthouse | School for Kids with Special Needs

A website built for schools or kindergartens for children with special needs. It’s compatible with the ThemeREX Donations plugin and has a powerful, fully responsive framework.

13. Melody

Melody – Music School WordPress Theme

Need to book private music lessons, offer group classes, and add audio to your site? If you’re making a music school website, check out this theme, Melody.

14. Driveme

Driveme – Driving School WordPress Theme

This WordPress theme is highly rated for its support team. It was a winner in the category of “Best Education & Learning Theme” for the Envato Most Wanted contest.

Online Learning Platforms

15. Skilled

Skilled | School Education Courses WordPress Theme

Okay, now we’re getting into the more complicated website types: learning management systems (LMS) and online courses. Skilled, a theme from elite WordPress designer Aislin, is one of the top sellers in this category; it’s a great starting point for any LMS website.

16. Elysian

Elysian – WordPress School Theme

A WordPress theme that’s easy to install, with modern typography choices and standout design.

17. Course Builder

WordPress LMS Theme for Online Courses, Schools & Education | Course Builder

This course builder supports online webinars, comments, email notifications, sales on courses, subscriptions, and more. This is ideal if you’re a teacher or starting your own online school/classes.

18. Guru

Guru | Learning Management WordPress Theme

Guru works with it all: WordPress, WooCommerce, WPML, MailChimp, Sensei, and Event Calendar. There’s super-detailed documentation, course analytics, grading, and it’s SEO-friendly.

19. Academy

Academy – Learning Management Theme

Includes user profiles, lesson plans, blogs, and a built-in media player.

20. Varsita

Varsita – Education Theme, A Learning Management System for WordPress

Designed by Themeum, Varsita is a top-seller for schools and online learning systems. It has PayPal payment integration, events management, five homepage variations, and Visual Composer. There’s also a dedicated support forum on their website.

Bonus Section!

Here are a few more WordPress theme picks that we couldn’t fit on the list:


Source: Webdesign Tuts+

How To Design Emotional Interfaces For Boring Apps

Alice Kotlyarenko

2018-04-10T13:20:05+02:00
2018-04-10T14:53:40+00:00

There’s a trickling line of ones and zeros that disappears behind a large yellow tube. A bear pops out of the tube as a clawed paw starts pointing at my browser’s toolbar, and a headline appears, saying: “Start your bear-owsing!”

Between my awwing and oohing I forget what I wanted to browse.

Products like a VPN service rarely evoke endearment — or any other emotion, for that matter. It’s not their job, not what they were built to do. But because TunnelBear does, I choose it over any other VPN and recommend it to my friends, so they can have some laughs while caught up in routine.

tunnelbear

Humans can’t endure boredom for a long time, which is why products that are built for non-exciting, repetitive tasks so often get abandoned and gather dust on computers and phones. But boredom, according to psychologists, is merely a lack of stimulation, the unfulfilled desire for satisfying activity. So what if we use the interface to give users that stimulation?

I sat with product designers here at MacPaw, who spend their waking hours designing not-so-sexy things like duplicate finders and encryption apps, and they shared five secrets to more emotional UIs: gamification, humor, animation, illustration, and mascots.

Games People Play

There’s some debate going on around the use of gamification in UIs: 24 empirical studies, for example, arrived at varying conclusions as to how effective it was. But then again, effectiveness depends on what you were trying to accomplish by designing those shiny achievement badges.

For many product creators, including Akar Sumset here, the point of gamification is not letting users have fun per se — it’s gently pushing them towards certain behaviors via said fun. Achievements, ranks, and leaderboards tap into the basic human need for esteem, trigger competitiveness, and supposedly urge users to do what you want them to, like make progress, keep coming back to the app, or share it on social media.

Gamification can succeed or fail at that, but what it surely achieves is an emotional response. Our brain is packed full of cells that control the levels of dopamine, one of the major neurochemicals of happiness. When something enjoyable happens, these neurons light up and trigger a release of dopamine. What’s even better, if this pleasant event is regular and can be predicted, they’ll light up and release dopamine before it even happens. What does that mean for your interface? That expecting an enjoyable thing such as the next achievement will give users little shots of happiness throughout their experience with the product.

Gamification in UI: Gemini 2 And Duolingo

When designing Gemini 2, the new version of our duplicate finder for Mac, we had a serious problem at hand. Reviewing gigabytes of files was soul-crushingly boring, and some users complained they quit before they were done. So what we tried to achieve with the achievement system was to intensify the feeling of a crossed-out item on a to-do list, which is the only upside of tedious tasks. The space theme, unwittingly set with the app’s name and exploited in the interface, was perfect for gamification. Our audience grew up on Star Wars and Star Trek, so sci-fi-inspired ranks would hit home with them.

Within days of the release, we started getting tweets from users asking for clues on the Easter Egg that would unlock the final achievement. A year after the release, Gemini 2 got the Red Dot Award for a design that exhibits “clarity and emotion.” So while it’s hard to measure how motivating our achievement system has been, it sure didn’t leave people cold.


Gamification in UI: Gemini 2

Another product that got it right — and has by far the most gamified interface I’ve seen — is Duolingo, an online service and mobile app for learning languages. Trying to master a foreign tongue from scratch is daunting, especially if it’s just you and your laptop, without the reassurance that comes with having a teacher. Given how quickly people lose interest in their language endeavors (speaking from experience here), Duolingo would have to go out of its way to keep you hooked. And it does.

Whenever you complete a quick 5-minute lesson, you earn 10 points. Take lessons 30 days in a row? Get an achievement. Complete 20 lessons without a single typo? Unlock another. For every baby step you take, your senses are rewarded with triumphant sounds and colorful graphics that trigger the release of that sweet, sweet dopamine. Eventually, you start associating Duolingo with the feeling of accomplishment and pride — the kind of feeling you want to come back to.


Gamification in UI: Duolingo

If you’d like to dive deeper into gamification, Gabe Zichermann’s book “Gamification by Design: Implementing Game Mechanics in Web and Mobile Apps” is a great way to start.

You’ve Got To Be Joking

Victor Yocco has made a solid case for using humor in web design as a tool to create memorable experiences, connect with users, and make your work stand out. But the biggest power of jokes is that they’re emotional. While we still don’t fully understand the nature of humor, one thing is clear: it makes humans happy. According to brain imaging research, funny cartoons activate the reward network in the limbic system — the same network that responds to eating, music, sex, and mood-altering drugs. In other words, a good joke gives people a kind of emotional high.

Would you want that kind of reaction to your interface? Of course. But the tricky part is that not only is humor subjective, but the way we respond to it depends a lot on the context. One thing is throwing in a pun on the launch screen; a completely different thing is goofing around in an error message. And while all humans enjoy humor in this or that form, it’s vital to know your audience: what they find hilarious and what might seem inappropriate, crude, or poorly timed. Not that different from cracking jokes in real life.

Humor in UI: Authentic Weather and Slack

One app that nails the use of humor — and not just as a complementary comic relief, but as a unique selling proposition — is Authentic Weather. Weather apps are a prime example of utilitarian products: they are something people use to get information, period. But with Authentic Weather, you get a lot more than that. No matter the weather, it’s going to crack you up with a snarky comment like “It’s ducking freezing,” “Go home winter,” and my personal favorite “It’s just okay. Look outside for more information.”

What happens when you use Authentic Weather is you don’t just open it for the forecast — you want to see what it comes up with next, and a routine task like checking the weather becomes a thing to look forward to in the morning. Now, the app’s moody commentary, packed full of f-words and scorn, would probably seem less entertaining to my mom. But being the grumpy millennial that I am, I find it hilarious, which proves humor works if you know your audience.


Humor in UI: Authentic Weather

Another interface that puts fun to good use is Slack’s. For an app people associate with work emergencies, Slack does a solid job creating a more humane experience, not least because of its one-liners. From loading screens to the moments when you’re finally caught up with all your chats, it cracks a joke when you don’t see it coming.

With such a diverse demographic, humor is hit or miss, so Slack plays it safe with goofy puns and good-natured banter — the kind of jokes that don’t exactly send you rolling on the floor but don’t annoy or offend either. In the best-case scenario, the user will chuckle and share the screenshot in one of their channels; in the worst-case scenario, they’ll just roll their eyes.


Humor in UI: Slack

More on Humor: “Just Kidding: Using Humor Effectively” by Louis R. Franzini.

Get The World Moving

Nearly every interface uses a form of animation. It’s the natural way to transition from one state to another. But animations in UI can serve a lot more purposes than signifying a change of state — they can help you direct attention and communicate what’s going on better than static visuals or copy ever could. The movement stimulates both visual and kinesthetic learning, which means users are more likely to stay focused and figure out how to use the thing.

These are all good reasons to incorporate animation into your design, but why does it elicit emotion, exactly? Simon Grozyan, who worked on our apps Encrypto and Gemini Photos, believes it’s because in the physical world we interpret animated things as alive:

“We are used to seeing things in movement. Everything around us is either moving or changing appearance because of the light. Static equals dead.”

In addition to the relatable, lifelike quality of a moving object, animation has the power of a delightful and unexpected thing that brings us a lot more pleasure than a thing equally delightful but expected. Therefore, by using it in spots less habitual than transitions you can achieve that coveted stimulation that makes your product fun to use.

Animation in UI: Encrypto and Shazam

Encrypto is a tiny Mac app that encrypts and decrypts your files so that you can send them to someone securely. It’s an indispensable tool for those who care about data security and privacy, but not the kind of tool you would feel attached to. Nevertheless, Encrypto is by far my favorite MacPaw app as far as design is concerned, thanks to the Matrix-style animated bar that slides over your file and transforms it into a new secured entity. Encryption comes to life; it’s no longer a dull process on your computer — it’s mesmerizing digital magic.

encrypto

Animation is at the heart of another great UI: that of Shazam, an app you probably have on your phone. When you use Shazam to find out what’s playing, the button you tap starts sending concentric circles outward and inward. This similarity to a throbbing audio speaker makes the interface almost tangible, physical — as if you’re blasting your favorite album on a powerful sound system.

shazam

More on Animation: “How Functional Animation Helps Improve User Experience”.

Art Is Everywhere

As Blair Culbreth argues, polished is no longer enough for interfaces. Sleek, professional design is expected, but it’s the personalized, humane details that users smile at and forward to their friends. Custom art can be this detail.

Unlike generic imagery, illustration is emotional, because it communicates more than meaning. It carries positive associations with cartoons every person used to watch as a child, shows things in a more playful, imaginative way, and, most importantly, contains a touch of the artist’s personality.

“I think when an artist creates an illustration they always infuse some of their personal experience, their context, their story into it,” says Max Kukurudziak, one of our product designers. The theory rings true — a human touch is more likely to stir feelings.

Illustration in UI: Gemini Photos and Google Calendar

One of our newest products, Gemini Photos, is an iPhone app that helps you clear out unneeded photos. Much like Gemini 2 for desktop, it involves some tedious reviewing for the user, so even with a handy and handsome UI, we’d have a hard time holding their attention and generally making them feel good.

Like in many of our previous apps, we used animations and sounds to enliven the interface, but custom art has become the highlight of the experience. As said above, it’s scientifically proven that surprising pleasurable things cause an influx of that happiness chemical into our blood, so by using quirky illustrations in unexpected spots we didn’t just fill up an empty screen — we added a tad of enjoyment to an otherwise monotonous activity.


Illustration in UI: Gemini Photos

One more example of how illustration can make a product more lovable is Google Calendar. Until recently there was a striking difference between the web version and the iOS app. While the former had the appeal of a spreadsheet, the latter instantly won my heart with one killer detail. For many types of events, Google Calendar slips in art that illustrates them, based on the keywords it picks up from event titles. That way, your plans for the week look a lot more exciting, even if all you’ve got going on is the gym and a dentist appointment.

But that’s not even the best thing. I realized that whenever I create a new event, I secretly hope Google Calendar will have art for it and feel genuinely pleased when it does. Just like that, using a calendar stopped being a necessity and became a source of positive emotion. And, apparently, the illustration experiment didn’t work for me alone, because Google recently rolled out the web version of their calendar with the same art.


Illustration in UI: Google Calendar

More on Illustration: “Illustration That Works: Professional Techniques For Artistic And Commercial Success” by Greg Houston.

What A Character

Cute characters that personify products have been used in web design and marketing for years (think Ronald McDonald and the Michelin Man). In interfaces — not quite as much. Mascots in UI can be perceived as intrusive and annoying, especially if they distract the user from an important action or obstruct the view. A notorious example of a mascot gone wrong is Microsoft’s Clippy: it evoked nothing but fear and loathing (which, of course, are emotions, but not the kind you’re looking for).

At the same time, studies show that people easily personify things, even if they are merely geometric figures. Lifelike creatures are easier to relate to, understand the behavior of, and generally feel some way about. Moreover, an animated character is easier to attribute a personality to, so you can broadcast the characteristics of your product through that character — make it playful and goofy, eager and helpful, or whatever you need it to be. With that much untapped potential, mascots are perfect for non-emotional products.

The trick is timing.

Clippy was so obnoxious because he appeared uninvited, interrupted completely unrelated tasks, and was generally in the way. But if the mascot shows up in a relatively idle moment — for example, the user has just completed a task — it will do its endearing job.

Mascots in UI: RememBear and Yelp

TunnelBear Inc. has recently beta launched another utility that’s cute as a button (no pun intended). RememBear is a password manager, and passwords are supposed to be no joke. But the brilliance of bear cartoons in RememBear is that they are nowhere in sight when you do serious, important things like creating a new entry. Instead, you get a bear hug when you’re done with stage one of signing up for the app and haven’t yet proceeded to stage two — saving your first password. By placing the mascot in this spot, RememBear avoided being in the way but made me smile when I least expected it.


Mascots in UI: RememBear

Just like RememBear, Yelp — a widely known app for restaurant reviews — has perfect timing for their mascot. The funny hamster first appeared at the bottom of the iOS app’s settings so that the user would discover it like an Easter egg.

“At Yelp we’re always striving to make our product and brand feel fun and delightful,” says Yoni De Beule, Yelp’s Product Design manager. “We reflect Yelp’s personality in everything from our fun poster designs and funny release notes to internal hackathon projects and Yelp Elite parties. When we found our iPhone settings page to be seriously lacking in the fun department, we decided to roll up our sleeves and fix it.”

The hamster in the iOS app later got company, as the team designed a velociraptor for the Android version and a dog for the web. So whenever — and wherever — you use Yelp, you almost want to run out of recommendations, so that you can see another version of the delightful character.


Mascots in UI: Yelp

If you’d like to learn how to create your own mascot, there’s a nice tutorial by Sirine (aka ‘Miss ChatZ’) on Envato Tuts+.

To Wrap It Up…

Not all products are inherently fun the way games or social media apps are, but even utilities don’t have to be merely utilitarian. Apps that deal with repetitive tasks often struggle to retain users: people abandon them because they feel bored, and boredom is simply a lack of stimulation. By using positive stimuli like humor, movement, unique art, game elements, and relatable characters, we can make users feel a different way — more excited, less distracted, and ultimately happier.

Further Reading



Source: Smashing Magazine

Supercharge Your Slide Presentations With Ludus

When you think about creating a slide presentation, you probably think of PowerPoint, Keynote, perhaps Google Slides, but in this tutorial I’m going to show you a way to supercharge your presentations with another tool.

Ludus (a web app) will take static presentations to another level, allowing you to add movement and more dynamism to whatever you’re presenting. Take a look at the screencast for full details:

How to Use Ludus for Better Presentations

 

Getting Started With Ludus

Visit ludus.one and hit the Start button to get things underway. 

Note: we’ll be using the free version, which limits us to a single user, 1GB of storage, and 20 presentations. Upgrading for $99 per year will give us several other options and more flexibility. A teams package is also available.

So, you’ll be asked to create an account (sign in with Facebook if you like) and you can then begin building presentations. 

Either start a new presentation, or opt for the user guide as a starting point, which is the perfect way for Ludus to demonstrate what’s possible and get you onboard.

Ludus onboarding

Hitting New Presentation will bring up a dialogue where you can enter the initial dimensions (choose from presets if you prefer) and the document title.

You’ll enter a document screen, similar in many ways to your favourite design applications, with controls along the top, a toolbar to the left, and panels on the right. At any point you can access all available slides by toggling the pane at the bottom of the screen:

Ludus Tools

The toolbar allows you to add things to the slides. Things like: 

  • Drawing mode, for pencil lines
  • Titles (headings)
  • Paragraphs
  • Basic shapes, like rectangles and circles
  • Arrows
  • Lines
  • Images
  • Videos
  • Code blocks
Ludus tools

To control each one, settings are available in the inspector panels to the right.

Ludus inspector panels

The controls available are quite varied and versatile too; position and scale are fairly standard, but the code blocks have different syntax highlighters to choose from, shapes and images can have blending modes applied, and there are many more examples besides. 

Smart Blocks

At the bottom of the inspector panels you’ll also see a + Add Smart Block button. This allows you to add elements which behave a little bit like symbols in graphics applications.

Ludus smart blocks

Adding a smart block (such as a common footer, or graphic) to multiple slides will allow you to make a change in one place and see those changes take effect throughout the document.

Slide Transitions

Once your slides are ready for presenting to the world, you can define the transitions (along with the durations) you want between each one. 

Ludus transitions

Pressing the play button at the top of the screen will show you what the transitions look like.

Show and Tell

Having saved your finished presentation you now need to show it to an audience. Hit Share to grab a URL, or take the embed snippet, or even download an HTML or PDF version just in case you need a version offline. 

Note: be aware that some of the effects and interactivity won’t necessarily work perfectly in the downloaded versions.

Ludus share options

Conclusion

There’s much more to explore in Ludus; take a look at the screencast for more details and go and have a play yourself!

More Presentation Goodness

The Complete Guide to Making Great Presentations free ebook from Tuts+


Source: Webdesign Tuts+

Designing For Accessibility And Inclusion

Steven Lambert

2018-04-09T14:45:39+02:00
2018-04-10T14:53:40+00:00

“Accessibility is solved at the design stage.” This is a phrase that Daniel Na and his team heard over and over again while attending a conference. To design for accessibility means to be inclusive to the needs of your users. This includes your target users, users outside of your target demographic, users with disabilities, and even users from different cultures and countries. Understanding those needs is the key to crafting better and more accessible experiences for them.

One of the most common problems when designing for accessibility is knowing what needs you should design for. It’s not that we intentionally design to exclude users, it’s just that “we don’t know what we don’t know.” So, when it comes to accessibility, there’s a lot to know.

How do we go about understanding the myriad of users and their needs? How can we ensure that their needs are met in our design? To answer these questions, I have found that it is helpful to apply a critical analysis technique of viewing a design through different lenses.

“Good [accessible] design happens when you view your [design] from many different perspectives, or lenses.”

The Art of Game Design: A Book of Lenses

A lens is “a narrowed filter through which a topic can be considered or examined.” Often used to examine works of art, literature, or film, lenses ask us to leave behind our worldview and instead view the world through a different context.

For example, viewing art through a lens of history asks us to understand the “social, political, economic, cultural, and/or intellectual climate of the time.” This allows us to better understand what world influences affected the artist and how that shaped the artwork and its message.

Accessibility lenses are a filter that we can use to understand how different aspects of the design affect the needs of the users. Each lens presents a set of questions to ask yourself throughout the design process. By using these lenses, you will become more inclusive to the needs of your users, allowing you to design a more accessible user experience for all.

The Lenses of Accessibility are:

You should know that not every lens will apply to every design. While some can apply to every design, others are more situational. What works best in one design may not work for another.

The questions provided by each lens are merely a tool to help you understand what problems may arise. As always, you should test your design with users to ensure it’s usable and accessible to them.

Lens Of Animation And Effects

Effective animations can help bring a page and brand to life, guide the users focus, and help orient a user. But animations are a double-edged sword. Not only can misusing animations cause confusion or be distracting, but they can also be potentially deadly for some users.

Fast flashing effects (defined as flashing more than three times a second) or high-intensity effects and patterns can trigger seizures in people with photosensitive epilepsy. Photosensitivity can also cause headaches, nausea, and dizziness. Users with photosensitive epilepsy have to be very careful when using the web as they never know when something might cause a seizure.
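As a rough illustration of that three-flashes-per-second limit, a sliding one-second window can flag offending sequences. The helper below is a hypothetical sketch of the counting logic only, not a substitute for a proper analysis tool such as PEAT:

```javascript
// Hypothetical helper: returns true if more than three flashes occur
// within any one-second window — the general threshold cited above.
function exceedsFlashThreshold(flashTimesMs) {
  const times = [...flashTimesMs].sort((a, b) => a - b);
  for (let i = 0; i < times.length; i++) {
    let count = 0;
    // Count flashes inside the one-second window starting at times[i].
    for (let j = i; j < times.length && times[j] - times[i] < 1000; j++) {
      count++;
    }
    if (count > 3) return true; // four or more flashes in one second
  }
  return false;
}
```

For example, a GIF that flashes at 0, 200, 400, and 600 ms packs four flashes into a single second and would be flagged, while one flash every half second would not.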

Other effects, such as parallax or motion effects, can cause some users to feel dizzy or experience vertigo due to vestibular sensitivity. The vestibular system controls a person’s balance and sense of motion. When this system doesn’t function as it should, it causes dizziness and nausea.

“Imagine a world where your internal gyroscope is not working properly. Very similar to being intoxicated, things seem to move of their own accord, your feet never quite seem to be stable underneath you, and your senses are moving faster or slower than your body.”

A Primer To Vestibular Disorders

Constant animations or motion can also be distracting to users, especially to users who have difficulty concentrating. GIFs are notably problematic as our eyes are drawn towards movement, making it easy to be distracted by anything that updates or moves constantly.

This isn’t to say that animation is bad and you shouldn’t use it. Instead, you should understand why you’re using the animation and how to design safer animations. Generally speaking, you should try to design animations that cover small distances, match the direction and speed of other moving objects (including scroll), and are small relative to the screen size.

You should also provide controls or options to tailor the experience to the user. For example, Slack lets you hide animated images or emojis, both as a global setting and on a per-image basis.

To use the Lens of Animation and Effects, ask yourself these questions:

  • Are there any effects that could cause a seizure?
  • Are there any animations or effects that could cause dizziness or vertigo through use of motion?
  • Are there any animations that could be distracting by constantly moving, blinking, or auto-updating?
  • Is it possible to provide controls or options to stop, pause, hide, or change the frequency of any animations or effects?
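One concrete way to implement the “stop, pause, hide” option from the last question is to respect the user’s system-level motion preference alongside an in-app setting. The sketch below is a minimal, hypothetical example (the helper and class names are mine); in a browser, the OS preference comes from the standard `prefers-reduced-motion` media query:

```javascript
// Pure decision helper: animate only when neither the OS preference
// nor an in-app setting (like Slack's) asks for reduced motion.
function shouldAnimate(prefersReducedMotion, userOptOut) {
  return !prefersReducedMotion && !userOptOut;
}

// Browser wiring (guarded so the snippet also runs outside a browser):
if (typeof window !== 'undefined' && window.matchMedia) {
  const reduce = window.matchMedia('(prefers-reduced-motion: reduce)').matches;
  // A hypothetical "no-animation" class your CSS could key off to
  // disable decorative transitions and looping GIFs.
  document.documentElement.classList.toggle('no-animation', !shouldAnimate(reduce, false));
}
```

Keeping the decision in a pure function makes it easy to add more inputs later, such as a per-image toggle.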

Lens Of Audio And Video

Autoplaying videos and audio can be pretty annoying. Not only do they break a user’s concentration, but they also force the user to hunt down the offending media and mute or stop it. As a general rule, don’t autoplay media.

“Use autoplay sparingly. Autoplay can be a powerful engagement tool, but it can also annoy users if undesired sound is played or they perceive unnecessary resource usage (e.g. data, battery) as the result of unwanted video playback.”

Google Autoplay guidelines

You’re now probably asking, “But what if I autoplay the video in the background but keep it muted?” While using videos as backgrounds may be a growing trend in today’s web design, background videos suffer from the same problems as GIFs and constant moving animations: they can be distracting. As such, you should provide controls or options to pause or disable the video.
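If you do keep a muted background video, giving users a pause control can be as simple as the sketch below. It relies only on the standard `paused`/`play()`/`pause()` media-element API; the return value is just for illustration:

```javascript
// Toggle a (background) video between playing and paused, so users
// who find the motion distracting can stop it.
function toggleVideo(video) {
  if (video.paused) {
    video.play(); // in browsers play() returns a promise; ignored here
  } else {
    video.pause();
  }
  return video.paused ? 'paused' : 'playing';
}
```

Wire it to a visible button near the video, e.g. `button.addEventListener('click', () => toggleVideo(bgVideo))`, and remember the video should start muted.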

Along with controls, videos should have transcripts and/or subtitles so users can consume the content in a way that works best for them. Users who are visually impaired or who would rather read instead of watch the video need a transcript, while users who aren’t able to or don’t want to listen to the video need subtitles.

To use the Lens of Audio and Video, ask yourself these questions:

  • Are there any audio or video that could be annoying by autoplaying?
  • Is it possible to provide controls to stop, pause, or hide any audio or videos that autoplay?
  • Do videos have transcripts and/or subtitles?

Lens Of Color

Color plays an important part in a design. Colors evoke emotions, feelings, and ideas. Colors can also help strengthen a brand’s message and perception. Yet the power of colors is lost when a user can’t see them or perceives them differently.

Color blindness affects roughly 1 in 12 men and 1 in 200 women. Deuteranopia (red-green color blindness) is the most common form of color blindness, affecting about 6% of men. Users with red-green color blindness typically perceive reds, greens, and oranges as yellowish.


Color Blindness Reference Chart for Deuteranopia, Protanopia, and Tritanopia

Deuteranopia (green color blindness) is common and causes reds to appear brown/yellow and greens to appear beige. Protanopia (red color blindness) is rare and causes reds to appear dark/black and orange/greens to appear yellow. Tritanopia (blue-yellow color blindness) is very rare and causes blues to appear more green/teal and yellows to appear violet/grey. (Source) (Large preview)

Color meaning is also problematic for international users. Colors mean different things in different countries and cultures. In Western cultures, red is typically used to represent negative trends and green positive trends, but the opposite is true in Eastern and Asian cultures.

Because colors and their meanings can be lost either through cultural differences or color blindness, you should always add a non-color identifier. Identifiers such as icons or text descriptions can help bridge cultural differences while patterns work well to distinguish between colors.


Six colored labels. Five use a pattern while the sixth doesn’t

Trello’s color blind friendly labels use different patterns to distinguish between the colors. (Large preview)

Oversaturated colors, high contrasting colors, and even just the color yellow can be uncomfortable and unsettling for some users, particularly those on the autism spectrum. It’s best to avoid high concentrations of these types of colors to help users remain comfortable.

Poor contrast between foreground and background colors makes content harder to see for users with low vision, those using a low-end monitor, or anyone who happens to be in direct sunlight. All text, icons, and focus indicators for keyboard users should meet a minimum contrast ratio of 4.5:1 against the background color.
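The 4.5:1 figure comes from the WCAG contrast ratio formula, which is based on the relative luminance of the two colors. A small sketch for checking a color pair:

```javascript
// Sketch: the WCAG 2.0 contrast ratio formula, usable to check a
// foreground/background pair against the 4.5:1 guideline.
// Colors are [r, g, b] arrays with channels from 0 to 255.

function relativeLuminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map((channel) => {
    const c = channel / 255;
    // Linearize the sRGB channel value as defined by WCAG 2.0.
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

function contrastRatio(foreground, background) {
  const l1 = relativeLuminance(foreground);
  const l2 = relativeLuminance(background);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Black on white is the maximum possible contrast, 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
```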

You should also ensure your design and colors work well in different settings of Windows High Contrast mode. A common pitfall is that text becomes invisible on certain high contrast mode backgrounds.

To use the Lens of Color, ask yourself these questions:

  • If the color was removed from the design, what meaning would be lost?
  • How could I provide meaning without using color?
  • Are any colors oversaturated or so high in contrast that they could cause users to become overstimulated or uncomfortable?
  • Do the foreground and background colors of all text, icons, and focus indicators meet the contrast ratio guideline of 4.5:1?

Lens Of Controls

Controls, also called ‘interactive content,’ are any UI elements that the user can interact with, be they buttons, links, inputs, or any HTML element with an event listener. Controls that are too small or too close together can cause lots of problems for users.

Small controls are hard to click on for users who are unable to be accurate with a pointer, such as those with tremors, or those who suffer from reduced dexterity due to age. The default size of checkboxes and radio buttons, for example, can pose problems for older users. Even when a label is provided that could be clicked on instead, not all users know they can do so.

Controls that are too close together can cause problems for touch screen users. Fingers are big and difficult to be precise with. Accidentally touching the wrong control can cause frustration, especially if that control navigates you away or makes you lose your context.


Tweet that says Software being Done is like lawn being Mowed. Jim Benson

When touching a single line tweet, it’s very easy to accidentally click the person’s name or handle instead of opening the tweet because there’s not enough space between them. (Source) (Large preview)

Controls that are nested inside another control can also contribute to touch errors. Not only is it not allowed in the HTML spec, it also makes it easy to accidentally select the parent control instead of the one you wanted.

To give users enough room to accurately select a control, the recommended minimum size for a control is 34 by 34 device independent pixels, but Google recommends at least 48 by 48 pixels, while the WCAG spec recommends at least 44 by 44 pixels. This size also includes any padding the control has, so a control could visually be 24 by 24 pixels, and an additional 10 pixels of padding on all sides would bring it up to 44 by 44 pixels.
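In CSS terms, that padding trick might look like this (the class name is illustrative):

```css
/* Sketch: a 24 by 24 pixel icon padded out to a 44 by 44 pixel
   touch target (24 + 10 + 10 = 44 in each dimension). */
.icon-button {
  width: 24px;
  height: 24px;
  padding: 10px;
  box-sizing: content-box; /* padding adds to the 24px content box */
}
```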

It’s also recommended that controls be placed far enough apart to reduce touch errors. Microsoft recommends at least 8 pixels of spacing while Google recommends controls be spaced at least 32 pixels apart.

Controls should also have a visible text label. Not only do screen readers require the text label to know what the control does, but it’s been shown that text labels help all users better understand a control’s purpose. This is especially important for form inputs and icons.

To use the Lens of Controls, ask yourself these questions:

  • Are any controls not large enough for someone to touch?
  • Are any controls so close together that it would be easy to touch the wrong one?
  • Are there any controls inside another control or clickable region?
  • Do all controls have a visible text label?

Lens Of Font

In the early days of the web, we designed web pages with a font size between 9 and 14 pixels. This worked out just fine back then as monitors had a relatively known screen size. We designed thinking that the browser window was a constant, something that couldn’t be changed.

Technology today is very different than it was 20 years ago. Today, browsers can be used on any device of any size, from a small watch to a huge 4K screen. We can no longer use fixed font sizes to design our sites. Font sizes must be as responsive as the design itself.

Not only should the font sizes be responsive, but the design should be flexible enough to allow users to customize the font size, line height, or letter spacing to a comfortable reading level. Many users make use of custom CSS that helps them have a better reading experience.

The font itself should be easy to read. You may be wondering if one font is more readable than another. The truth of the matter is that the font doesn’t really make a difference to readability. Instead, it’s the font style that plays an important role in a font’s readability.

Decorative or cursive font styles are harder to read for many users, but especially problematic for users with dyslexia. Small font sizes, italicized text, and all uppercase text are also difficult for users. Overall, larger text, shorter line lengths, taller line heights, and increased letter spacing can help all users have a better reading experience.

To use the Lens of Font, ask yourself these questions:

  • Is the design flexible enough that the font could be modified to a comfortable reading level by the user?
  • Is the font style easy to read?

Lens Of Images and Icons

They say, “A picture is worth a thousand words.” Still, a picture you can’t see is speechless, right?

Images can be used in a design to convey a specific meaning or feeling. Other times they can be used to simplify complex ideas. Whatever the case, a user who uses a screen reader needs to be told what the meaning of the image is.

As the designer, you understand best the meaning or information the image conveys. As such, you should annotate the design with this information so it’s not left out or misinterpreted later. This will be used to create the alt text for the image.

How you describe an image depends entirely on context, or how much textual information is already available that describes the information. It also depends on if the image is just for decoration, conveys meaning, or contains text.

“You almost never describe what the picture looks like, instead you explain the information the picture contains.”

Five Golden Rules for Compliant Alt Text

Since knowing how to describe an image can be difficult, there’s a handy decision tree to help when deciding. Generally speaking, if the image is decorative or there’s surrounding text that already describes the image’s information, no further information is needed. Otherwise, you should describe the information of the image. If the image contains text, repeat the text in the description as well.
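As a sketch (file names and text are made up), the three cases from the decision tree translate to markup like this:

```html
<!-- Decorative image: an empty alt tells screen readers to skip it. -->
<img src="divider.png" alt="">

<!-- Informative image: describe the information, not the appearance. -->
<img src="starry-night.jpg" alt="Vincent van Gogh's The Starry Night">

<!-- Image containing text: repeat the text in the description. -->
<img src="sale-banner.png" alt="Spring sale: 20% off all plants">
```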

Descriptions should be succinct. It’s recommended to use no more than two sentences, but aim for one concise sentence when possible. This allows users to quickly understand the image without having to listen to a lengthy description.

As an example, if you were to describe this image for a screen reader, what would you say?


Vincent van Gogh’s The Starry Night

Source (Large preview)

Since we describe the information of the image and not the image itself, the description could be “Vincent van Gogh’s The Starry Night”, since there is no other surrounding context that describes it. What you shouldn’t put is a description of the style of the painting or what the picture looks like.

If the information of the image would require a lengthy description, such as a complex chart, you shouldn’t put that description in the alt text. Instead, you should still use a short description for the alt text and then provide the long description as either a caption or link to a different page.

This way, users can still get the most important information quickly but have the ability to dig in further if they wish. If the image is of a chart, you should repeat the data of the chart just like you would for text in the image.
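One way to pair a short alt text with a longer description (URLs are placeholders) is a figure whose caption links out to the full data:

```html
<!-- Sketch: short alt text plus a link to the full description. -->
<figure>
  <img src="revenue-chart.png"
       alt="Bar chart of monthly revenue for 2017">
  <figcaption>
    Monthly revenue for 2017.
    <a href="revenue-data.html">View the full data</a>
  </figcaption>
</figure>
```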

If the platform you are designing for allows users to upload images, you should provide a way for the user to enter the alt text along with the image. For example, Twitter allows its users to write alt text when they upload an image to a tweet.

To use the Lens of Images and Icons, ask yourself these questions:

  • Does any image contain information that would be lost if it was not viewable?
  • How could I provide the information in a non-visual way?
  • If the image is controlled by the user, is it possible to provide a way for them to enter the alt text description?

Lens Of Keyboard

Keyboard accessibility is among the most important aspects of an accessible design, yet it is also among the most overlooked.

There are many reasons why a user would use a keyboard instead of a mouse. Users who use a screen reader use the keyboard to read the page. A user with tremors may use a keyboard because it provides better accuracy than a mouse. Even power users will use a keyboard because it’s faster and more efficient.

A user using a keyboard typically uses the tab key to navigate to each control in sequence. A logical tab order greatly helps users know where the next key press will take them. In Western cultures, this usually means left to right, top to bottom. Unexpected tab orders result in users becoming lost and having to scan frantically for where the focus went.

Sequential tab order also means that they must tab through all controls that are before the one that they want. If that control is tens or hundreds of keystrokes away, it can be a real pain point for the user.

By moving the most important user flows nearer to the top of the tab order, we can help our users be more efficient and effective. However, this isn’t always possible or practical. In these cases, providing a way to quickly jump to a particular flow or piece of content can still let users be efficient. This is why “skip to content” links are helpful.
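A common implementation (the class name and styles are just one approach) keeps the skip link visually hidden until it receives keyboard focus:

```html
<!-- Sketch: a skip link that is visually hidden until focused. -->
<style>
  .skip-link {
    position: absolute;
    left: -9999px;
  }
  .skip-link:focus {
    left: 0;
  }
</style>
<a class="skip-link" href="#main">Skip to content</a>
<nav><!-- many navigation links --></nav>
<main id="main">
  <h1>Page content</h1>
</main>
```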

A good example of this is Facebook which provides a keyboard navigation menu that allows users to jump to specific sections of the site. This greatly speeds up the ability for a user to interact with the page and the content they want.


facebook

Facebook provides a way for all keyboard users to jump to specific sections of the page, or other pages within Facebook, as well as an Accessibility Help menu. (Large preview)

When tabbing through a design, focus styles should always be visible or a user can easily become lost. Just like an unexpected tab order, not having good focus indicators results in users not knowing what is currently focused and having to scan the page.

Changing the look of the default focus indicator can sometimes improve the experience for users. A good focus indicator doesn’t rely on color alone to indicate focus (Lens of Color), and should be distinct enough for the user to find it easily. For example, a blue focus ring around a similarly colored blue button may not be distinct enough to show that the button is focused.
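For example, a focus style might combine a solid outline with an offset, so the ring stays visible regardless of the control’s own fill (the color value is just an example):

```css
/* Sketch: a focus indicator that stands out on light and dark controls.
   The outline-offset leaves a gap so the ring reads against any fill. */
button:focus,
a:focus {
  outline: 3px solid #1a73e8; /* example color, not a required value */
  outline-offset: 2px;
}
```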

Although this lens focuses on keyboard accessibility, it’s important to note that it applies to any way a user could interact with a website without a mouse. Devices such as mouth sticks, switch access buttons, sip and puff buttons, and eye tracking software all require the page to be keyboard accessible.

By improving keyboard accessibility, you allow a wide range of users better access to your site.

To use the Lens of Keyboard, ask yourself these questions:

  • What keyboard navigation order makes the most sense for the design?
  • How could a keyboard user get to what they want in the quickest way possible?
  • Is the focus indicator always visible and visually distinct?

Lens Of Layout

Layout contributes a great deal to the usability of a site. Having a layout that is easy to follow with easy to find content makes all the difference to your users. A layout should have a meaningful and logical sequence for the user.

With the advent of CSS Grid, being able to change the layout to be more meaningful based on the available space is easier than ever. However, changing the visual layout creates problems for users who rely on the structural layout of the page.

The structural layout is what is used by screen readers and users using a keyboard. When the visual layout changes but not the underlying structural layout, these users can become confused as their tab order is no longer logical. If you must change the visual layout, you should do so by changing the structural layout so users using a keyboard maintain a sequential and logical tab order.

The layout should be resizable and flexible to a minimum of 320 pixels with no horizontal scroll bars so that it can be viewed comfortably on a phone. The layout should also be flexible enough to be zoomed in to 400% (also with no horizontal scroll bars) for users who need to increase the font size for a better reading experience.

Users using a screen magnifier benefit when related content is in close proximity. A screen magnifier only provides the user with a small view of the entire layout, so content that is related but far away, or that changes far from where the interaction occurred, is hard to find and can go unnoticed.

GIF of CodePen showing that clicking on a button does not update the interface
When performing a search on CodePen, the search button is in the top right corner of the page. Clicking the button reveals a large search input on the opposite side of the screen. A user using a screen magnifier would be hard pressed to notice the change and would think the button doesn’t work. (Large preview)

To use the Lens of Layout, ask yourself these questions:

  • Does the layout have a meaningful and logical sequence?
  • What should happen to the layout when it’s viewed on a small screen or zoomed in to 400%?
  • Is content that is related or changes due to user interaction in close proximity to one another?

Lens Of Material Honesty

Material honesty is an architectural design value that states that a material should be honest to itself and not be used as a substitute for another material. It means that concrete should look like concrete and not be painted or sculpted to look like bricks.

Material honesty values and celebrates the unique properties and characteristics of each material. An architect who follows material honesty knows when each material should be used and how to use it without tarnishing itself.

Material honesty is not a hard and fast rule though. It lies on a continuum. Like all values, you are allowed to break them when you understand them. As the saying goes, they are “more what you’d call ‘guidelines’ than actual rules.”

When applied to web design, material honesty means that one element or component shouldn’t look, behave, or function as if it were another element or component. Doing so would cheat the user and could lead to confusion. A common example of this is buttons that look like links or links that look like buttons.

Links and buttons have different behaviors and affordances. A link is activated with the enter key, typically takes you to a different page, and has a special context menu on right click. Buttons are activated with the space key, used primarily to trigger interactions on the current page, and have no such context menu.
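In markup, this means using the element that matches the behavior, then styling each distinctly:

```html
<!-- A link navigates to another page; a button triggers an action
     on the current page. The URL and labels are placeholders. -->
<a href="/pricing">View pricing</a>
<button type="button">Open menu</button>
```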

When a link is styled to look like a button or vice versa, a user can become confused because it does not behave and function the way it looks. If the “button” unexpectedly navigates the user away, they might become frustrated, especially if they lose data in the process.

“At first glance everything looks fine, but it won’t stand up to scrutiny. As soon as such a website is stress‐tested by actual usage across a range of browsers, the façade crumbles.”

Resilient Web Design

Where this becomes the most problematic is when a link and button are styled the same and are placed next to one another. As there is nothing to differentiate between the two, a user can accidentally navigate when they thought they wouldn’t.


Three links and/or buttons shown inline with text

Can you tell which one of these will navigate you away from the page and which won’t? (Large preview)

When a component behaves differently than expected, it can easily lead to problems for users using a keyboard or screen reader. An autocomplete menu that is more than an autocomplete menu is one such example.

Autocomplete is used to suggest or predict the rest of a word a user is typing. An autocomplete menu allows a user to select from a large list of options when not all options can be shown.

An autocomplete menu is typically attached to an input field and is navigated with the up and down arrow keys, keeping the focus inside the input field. When a user selects an option from the list, that option will override the text in the input field. Autocomplete menus are meant to be lists of just text.

The problem arises when an autocomplete menu starts to gain more behaviors. Not only can you select an option from the list, but you can edit it, delete it, or even expand or collapse sections. The autocomplete menu is no longer just a simple list of selectable text.


With the addition of edit, delete, and profile buttons, this autocomplete menu is materially dishonest. (Large preview)

The added behaviors mean you can no longer just use the up and down arrows to select an option. Each option now has more than one action, so a user needs to traverse two dimensions instead of just one. This means that a user using a keyboard could become confused about how to operate the component.

Screen readers suffer the most from this change in behavior, as there is no easy way to help them understand it. A lot of work will be required to ensure the menu is accessible to a screen reader by using non-standard means. As such, it might result in a sub-par or inaccessible experience for them.

To avoid these issues, it’s best to be honest to the user and the design. Instead of combining two distinct behaviors (an autocomplete menu and edit and delete functionality), leave them as two separate behaviors. Use an autocomplete menu to just autocomplete the name of a user, and have a different component or page to edit and delete users.

To use the Lens of Material Honesty, ask yourself these questions:

  • Is the design being honest to the user?
  • Are there any elements that behave, look, or function as another element?
  • Are there any components that combine distinct behaviors into a single component? Does doing so make the component materially dishonest?

Lens Of Readability

Have you ever picked up a book only to get a few paragraphs or pages in and want to give up because the text was too hard to read? Hard-to-read content is mentally taxing and tiring.

Sentence length, paragraph length, and complexity of language all contribute to how readable the text is. Complex language can pose problems for users, especially those with cognitive disabilities or who aren’t fluent in the language.

Along with using plain and simple language, you should ensure each paragraph focuses on a single idea. A paragraph with a single idea is easier to remember and digest. The same is true of a sentence with fewer words.

Another contributor to the readability of content is the length of a line. The ideal line length is often quoted to be between 45 and 75 characters. A line that is too long causes users to lose focus and makes it harder to move to the next line correctly, while a line that is too short causes users to jump too often, causing fatigue on the eyes.

“The subconscious mind is energized when jumping to the next line. At the beginning of every new line the reader is focused, but this focus gradually wears off over the duration of the line”

— Typographie: A Manual of Design

You should also break up the content with headings, lists, or images to give mental breaks to the reader and support different learning styles. Use headings to logically group and summarize the information. Headings, links, controls, and labels should be clear and descriptive to enhance the user’s ability to comprehend.

To use the Lens of Readability, ask yourself these questions:

  • Is the language plain and simple?
  • Does each paragraph focus on a single idea?
  • Are there any long paragraphs or long blocks of unbroken text?
  • Are all headings, links, controls, and labels clear and descriptive?

Lens Of Structure

As mentioned in the Lens of Layout, the structural layout is what is used by screen readers and users using a keyboard. While the Lens of Layout focused on the visual layout, the Lens of Structure focuses on the structural layout, or the underlying HTML and semantics of the design.

As a designer, you may not write the structural layout of your designs. This shouldn’t stop you from thinking about how your design will ultimately be structured though. Otherwise, your design may result in an inaccessible experience for a screen reader.

Take for example a design for a single elimination tournament bracket.


Eight person tournament bracket featuring George, Fred, Linus, Lucy, Jack, Jill, Fred, and Ginger. Ginger ultimately wins against George.

Large preview

How would you know if this design was accessible to a user using a screen reader? Without understanding structure and semantics, you may not. As it stands, the design would probably result in an inaccessible experience for a user using a screen reader.

To understand why that is, we first must understand that a screen reader reads a page and its content in sequential order. This means that every name in the first column of the tournament would be read, followed by all the names in the second column, then third, then the last.

“George, Fred, Linus, Lucy, Jack, Jill, Fred, Ginger, George, Lucy, Jack, Ginger, George, Ginger, Ginger.”

If all you had was a list of seemingly random names, how would you interpret the results of the tournament? Could you say who won the tournament? Or who won game 6?

With nothing more to work with, a user using a screen reader would probably be a bit confused about the results. To be able to understand the visual design, we must provide the user with more information in the structural design.

This means that as a designer you need to know how a screen reader interacts with the HTML elements on a page so you know how to enhance their experience.

  • Landmark Elements (header, nav, main, and footer)
    Allow a screen reader to jump to important sections in the design.
  • Headings (h1 through h6)
    Allow a screen reader to scan the page and get a high level overview. Screen readers can also jump to any heading.
  • Lists (ul and ol)
    Group related items together, and allow a screen reader to easily jump from one item to another.
  • Buttons
    Trigger interactions on the current page.
  • Links
    Navigate or retrieve information.
  • Form labels
    Tell screen readers what each form input is.

Knowing this, how might we provide more meaning to a user using a screen reader?

To start, we could group each column of the tournament into rounds and use headings to label each round. This way, a screen reader would understand when a new round takes place.

Next, we could help the user understand which players are playing against each other each game. We can again use headings to label each game, allowing them to find any game they might be interested in.

By just adding headings, the content would read as follows:

“Round 1, Game 1, George, Fred, Game 2, Linus, Lucy, Game 3, Jack, Jill, Game 4, Fred, Ginger, Round 2, Game 5, George, Lucy, Game 6, Jack, Ginger, Round 3, Game 7, George, Ginger, Winner, Ginger.”

This is already a lot more understandable than before.

The information still doesn’t answer who won a game though. To know that, you’d have to understand which game a winner plays next to see who won the previous game. For example, you’d have to know that the winner of game four plays in game six to know who advanced from game four.

We can further enhance the experience by informing the user who won each game so they don’t have to go hunting for it. Putting the text “(winner)” after the person who won the round would suffice.

We should also further group the games and rounds together using lists. Lists provide the structural semantics of the design, essentially informing the user of the connected nodes from the visual design.
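A rough sketch of that structural layout for the first round might look like this (headings label rounds and games, lists group the players):

```html
<!-- Sketch of the bracket's structural layout. -->
<h2>Round 1</h2>
<ol>
  <li>
    <h3>Game 1</h3>
    <ul>
      <li>George (winner)</li>
      <li>Fred</li>
    </ul>
  </li>
  <li>
    <h3>Game 2</h3>
    <ul>
      <li>Linus</li>
      <li>Lucy (winner)</li>
    </ul>
  </li>
  <!-- Games 3 and 4 follow the same pattern. -->
</ol>
```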

If we translate this back into a visual design, the result could look as follows:


The tournament bracket

The tournament with descriptive headings and winner information (shown here with grey background). (Large preview)

Since the headings and winner text are redundant in the visual design, you could hide them just from visual users so the end visual result looks just like the first design.

“If the end result is visually the same as where we started, why did we go through all this?” you may ask.

The reason is that you should always annotate your design with all the necessary structural design requirements needed for a better screen reader experience. This way, the person who implements the design knows to add them. If you had just handed the first design to the implementer, it would more than likely end up inaccessible.

To use the Lens of Structure, ask yourself these questions:

  • Can I outline a rough HTML structure of my design?
  • How can I structure the design to better help a screen reader understand the content or find the content they want?
  • How can I help the person who will implement the design understand the intended structure?

Lens Of Time

Periodically in a design you may need to limit the amount of time a user can spend on a task. Sometimes it may be for security reasons, such as a session timeout. Other times it could be due to a non-functional requirement, such as a time constrained test.

Whatever the reason, you should understand that some users may need more time in order to finish the task. Some users might need more time to understand the content, others might not be able to perform the task quickly, and a lot of the time they could simply have been interrupted.

“The designer should assume that people will be interrupted during their activities”

— The Design of Everyday Things

Users who need more time to perform an action should be able to adjust or remove a time limit when possible. For example, with a session timeout you could alert the user when their session is about to expire and allow them to extend it.
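A minimal sketch of that session timeout pattern (the 30-minute and 2-minute durations are made-up values, and the warning callback would show the alert in a real UI):

```javascript
// Sketch: warn the user two minutes before a hypothetical 30-minute
// session expires, and give them a way to extend it.
const SESSION_MS = 30 * 60 * 1000;
const WARNING_MS = 2 * 60 * 1000;

function startSessionTimer(onWarn, onExpire) {
  const warnId = setTimeout(onWarn, SESSION_MS - WARNING_MS);
  const expireId = setTimeout(onExpire, SESSION_MS);
  // The returned function cancels both timers and restarts the clock,
  // which is what an "extend my session" button would call.
  return function extend() {
    clearTimeout(warnId);
    clearTimeout(expireId);
    return startSessionTimer(onWarn, onExpire);
  };
}
```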

To use the Lens of Time, ask yourself this question:

  • Is it possible to provide controls to adjust or remove time limits?

Bringing It All Together

So now that you’ve learned about the different lenses of accessibility through which you can view your design, what do you do with them?

The lenses can be used at any point in the design process, even after the design has been shipped to your users. Just start with a few of them at hand, and one at a time carefully analyze the design through a lens.

Ask yourself the questions and see if anything should be adjusted to better meet the needs of a user. As you slowly make changes, bring in other lenses and repeat the process.

By looking through your design one lens at a time, you’ll be able to refine the experience to better meet users’ needs. As you are more inclusive to the needs of your users, you will create a more accessible design for all your users.

Using lenses and insightful questions to examine principles of accessibility was heavily influenced by Jesse Schell and his book “The Art of Game Design: A Book of Lenses.”

Smashing Editorial (il, ra, yk)


Source: Smashing Magazine