Research Update
December 19, 2022
The Shortcuts app, reimagined.
Interventions built on top of the Shortcuts app through Siri shortcuts to aid action discovery and shortcut troubleshooting made the application easier to use for both novice and advanced users.
Project Dates: November 2022 - December 2022
Built with: Siri Shortcuts, SwiftUI, Python
A screenshot of the redesigned Shortcuts app showing search results for "insert value."
The Shortcuts app is an application distributed by Apple across its platforms that enables users to extend the functionality of their devices with personalized shortcuts. User evaluation showed that the current version of the application lacked adequate tools to discover actions, the building blocks of shortcuts, and to debug user-created shortcuts. In this project, I introduced extra functionality, built on top of the Shortcuts app through Siri shortcuts, to aid action discovery and shortcut troubleshooting. I then performed a user study and empirically verified that the proposed interventions enhanced the Shortcuts experience for both novice and advanced users. What remains to be seen is the impact on the user experience after the interventions’ integration into the Shortcuts app.
About the Shortcuts app
Shortcuts is a visual scripting application developed by Apple and provided on its iOS, iPadOS, macOS, and watchOS operating systems. The app lets users create macros for executing specific tasks on their devices.
An action, the building block of a shortcut, is a single step in a Shortcuts task. These task sequences can be created by the user using action “blocks” and shared online for other users to use. A number of curated shortcuts can also be downloaded from the integrated Gallery section of the Shortcuts app.
Building a shortcut with 3 actions already placed. Source: Apple
Discovering user needs
To understand how useful and efficient the current iteration of the Shortcuts app is for users, I invited a diverse group of iOS users to participate in a usability evaluation study. The group comprised users who had either never used the Shortcuts app, only used it to run shortcuts provided by installed apps, or used it to create their own shortcuts. I limited enrollment to users who had been using iOS for at least six months, to ensure that participants were familiar with the iOS paradigm and mental models on which every iOS app, including Shortcuts, is based.
I asked participants to perform three tasks:
Open the Shortcuts app and create an empty shortcut.
Create a Shortcut that performs simple calculations with an input (here, tipping amount based on purchase price).
Create a Shortcut that performs a complex action with conditional logic (here, send a custom text to a contact based on the time of day).
Overall, participants from all groups were able to complete at least some of the tasks. As expected, most users found the creation of the messaging shortcut the hardest, while many managed to create a working version of the tipping shortcut. Users who had never used Shortcuts before had the hardest time creating correct shortcuts, while all users with prior shortcut creation experience managed to create a correct tipping shortcut. Finally, even though all users with prior shortcut creation experience were able to create the messaging shortcut, two of them were not able to fully debug the shortcut they created.
I performed interviews with each participant after they completed the challenge and noticed the following trends:
Participants found it hard to discover new actions that might expand the functionality of the shortcut they had created.
Participants had a hard time searching for actions that would perform a specific task. Often, the search query they entered did not match the name or description of an action, so the action did not show up.
Debugging a long or complicated shortcut took a long time for many participants. Experienced Shortcuts users inserted “Quick Look” actions to check the value of variables, in an attempt to simulate breakpoint debugging.
Participants found it hard to debug their Shortcuts, as they could not test the Shortcut by providing pre-determined input without triggering Shortcut actions like sending messages.
All in all, two directions of improvement emerged for Shortcuts: discovering actions and debugging.
Implementing the interventions
Since it was impossible to directly modify the Shortcuts app interface, I prototyped my design changes through Siri shortcuts that were triggered when the user said a specific phrase. In some cases, when the user interacted with the intervention’s screen, they were redirected back to the Shortcuts app and some actions were performed using AssistiveTouch, simulating the behavior the app would support if the implementation were integrated into the Shortcuts app. I developed four interventions, corresponding to the feedback gathered in the user evaluation.
Interventions to aid action discovery
Action autosuggestions
This intervention augments the current “Next Action Suggestions” panel in the Shortcuts app. The panel presents three or four action suggestions that the user might find useful to integrate into their shortcut. The action autosuggestions intervention used an Action Relational Model (ARM) that I created from user submissions in the r/Shortcuts subreddit. In short, I parsed the user-submitted shortcuts to train a prediction model that suggests future actions based on the actions the user has already placed. See the full report in the resources section below for technical details.
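The ARM internals are described in the full report; as a rough illustration of the idea, a minimal next-action predictor over parsed action sequences could count which actions tend to follow which. The function names (`train_arm`, `suggest_next`) and the toy corpus below are illustrative, not taken from the project:

```python
from collections import Counter, defaultdict

def train_arm(shortcuts):
    """Count how often each action follows another across parsed shortcuts."""
    follows = defaultdict(Counter)
    for actions in shortcuts:
        for prev, nxt in zip(actions, actions[1:]):
            follows[prev][nxt] += 1
    return follows

def suggest_next(follows, placed_actions, k=4):
    """Suggest up to k likely next actions, given the last action placed."""
    if not placed_actions:
        return []
    last = placed_actions[-1]
    return [action for action, _ in follows[last].most_common(k)]

# Toy stand-in for parsed user-submitted shortcuts
corpus = [
    ["Get Current Date", "Format Date", "Send Message"],
    ["Get Current Date", "Format Date", "Quick Look"],
    ["Ask for Input", "Calculate", "Quick Look"],
]
model = train_arm(corpus)
print(suggest_next(model, ["Get Current Date", "Format Date"]))
```

A frequency model like this only looks at the immediately preceding action; the actual ARM can condition on more of the shortcut’s context.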
Users bring up the Siri shortcut by saying “Hey Siri, show me suggestions for [action name].” The shortcut shows a list of clickable suggested actions. Once the user selects an action from the list, they are redirected back to the Shortcuts app and the selected action is searched for and added to the shortcut using AssistiveTouch.
Example of an autosuggestion prompt.
Natural language action suggestions
The current search function requires users to search using keywords included in the name or short description of an action. For instance, if a user searched for “insert value” hoping to find the action that creates a new dictionary value (“Set Dictionary Value”), the search would return no results. This creates a sub-optimal user experience, especially for novice or intermediate users who are not familiar with exact action names. The natural language action suggestions intervention makes use of the Stanford Named Entity Recognizer (CRFClassifier) and the action names and descriptions provided in the Shortcuts app. The action names and descriptions served as the model data from which the CRFClassifier identified the relevant action based on the user’s search query.
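The intervention itself relies on the CRFClassifier; a much simpler sketch of the same idea, ranking actions by token overlap between a free-form query and each action’s name plus description, could look like the following. The action catalog and the `search_actions` helper are illustrative assumptions:

```python
def tokenize(text):
    """Lowercase and split text into a set of word tokens."""
    return set(text.lower().split())

# Illustrative subset of Shortcuts action names and descriptions
ACTIONS = {
    "Set Dictionary Value": "insert or update a value for a key in a dictionary",
    "Get Dictionary Value": "read the value stored under a key in a dictionary",
    "Quick Look": "preview the input with Quick Look",
}

def search_actions(query, actions=ACTIONS, k=3):
    """Rank actions by token overlap between the query and name + description."""
    q = tokenize(query)
    scored = []
    for name, desc in actions.items():
        score = len(q & (tokenize(name) | tokenize(desc)))
        if score:
            scored.append((score, name))
    scored.sort(key=lambda s: -s[0])
    return [name for _, name in scored[:k]]

print(search_actions("insert value"))  # "Set Dictionary Value" ranks first
```

Unlike a keyword-exact search, this matches “insert value” to “Set Dictionary Value” because the description, not just the name, contributes to the score.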
Users bring up the shortcut by saying “Hey Siri, search for action.” The shortcut shows an input field, where the user types the search query they would have typed in the actual shortcut search field. The shortcut then presents the user with a list of clickable suggested actions. Once the user selects an action from the list, they are redirected back to the Shortcuts app and the selected action is searched for and added to the shortcut using AssistiveTouch.
Example of a search results action prompt for the search query "insert value."
Interventions to aid debugging
Flowchart debugger
The flowchart debugger intervention makes use of the same Shortcuts parser that was used to parse the data from the r/Shortcuts subreddit. The parsed data is then served to the Flowchart shortcut, which creates an easy-to-follow diagram that highlights relationships between data. For instance, if-statement blocks are prominently displayed, and variable relationships are depicted with arrows.
Test case debugger
The test-case debugger intervention also makes use of the Shortcuts parser. The data is provided to the Test case shortcut, which uses the initial values and implements a “Quick Look” step between each action for step-by-step debugging (similar to the breakpoint method).
Users bring up the shortcut by saying “Hey Siri, test my shortcut.” The shortcut shows a menu for input variables. Once the user inputs the initial variable conditions, they are presented with consecutive “Quick Look” results, which show the current condition of each variable after each action. More importantly, the Test case debugger skips all outward-facing actions, which allows for repeated test-case debugging.
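The rewriting behind this can be sketched as follows: add a “Quick Look” step after each action and replace outward-facing actions with a skip marker. The `OUTWARD_FACING` list and the `build_test_run` helper are illustrative assumptions, not the intervention’s actual implementation:

```python
# Assumed, hand-maintained list of actions with external side effects
OUTWARD_FACING = {"Send Message", "Send Email", "Post to Shared Album"}

def build_test_run(actions):
    """Rewrite an action list for test-case debugging: skip outward-facing
    actions and add a Quick Look step after each action."""
    debug_run = []
    for action in actions:
        if action in OUTWARD_FACING:
            debug_run.append(f"[skipped: {action}]")
        else:
            debug_run.append(action)
        debug_run.append("Quick Look")
    return debug_run

print(build_test_run(["Ask for Input", "Calculate", "Send Message"]))
```

Because no message is actually sent, the same test case can be re-run as many times as needed while iterating on the shortcut.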
Example of a test case debugger prompt for the circumference example shortcut used in the intervention user evaluation study.
Evaluating the interventions
To maintain consistency between the two studies, I invited the same group of 15 participants that participated in the initial usability evaluation study. The purpose of this user evaluation study was to understand the user’s attitude towards the interventions and compare their stance to the original design, specifically when creating and using user-created shortcuts.
I asked participants to perform two tasks, equivalent to the last two of the original user evaluation:
Create a Shortcut that performs simple calculations with an input (here, the circumference and area of a circle). Participants were instructed to use the action autosuggestions and natural language action suggestions.
Create a Shortcut that performs a complex action with conditional logic (here, send a custom email to a contact including the contact's name). Participants were instructed to use the flowchart and test case debugger.
Note that the first shortcut creation task contained specific action hints that users could use, while the second was open-ended, in order to evaluate the action-discovery functionality. Users were instructed to test and debug both of their created shortcuts to evaluate the debugging functionality.
All participants showed an improved ability to complete the tasks. Notably, participants reported being up to two times more likely to use the redesigned app in the future compared to their original app experience. See the full report in the resources section below for detailed statistics.
During interviews after the challenges, participants noted that the action autosuggestions were more relevant than the built-in suggestions within the Shortcuts app. The search function particularly interested users who were using Shortcuts for the first time and users who had only executed shortcuts, as it let them discover which functions were available to help them create the requested shortcuts. Users with prior shortcut creation experience made better and more frequent use of the debugging interventions.
Tensions between discoverability and complexity
The results of the small intervention user evaluation study point to a general design principle: a product is as good as its discoverability. As reported in the initial user study, participants found it hard to discover the entirety of the app's functionality through the current “Next Action Suggestions” panel and the search function. The discoverability interventions aid in bridging the gap, especially for new or simple-use Shortcuts users.
At the same time, the second user evaluation study also highlights a tension in design. It is clear that Apple designers were aware of additional functionality they could implement, similar to the proposed debugging interventions. After all, other professional software shipped by the company features advanced capabilities that satisfy even the most demanding users. However, the Shortcuts app is not an app for professionals. It’s a stock app that comes pre-installed with every iPhone. Therefore, the app should be approachable and useful for every user. This leads to the hard decision of filtering in enough features to make the app powerful and useful, while filtering out features that would overwhelm most users. More exploration is due here, but one way to satisfy both sides is to hide some complexity and let advanced users opt in to it.
Limitations and future work
The results of this project confirm the research hypothesis: the Shortcuts app's utility will increase with more advanced action discovery and debugging capabilities. As a companion to the stock Shortcuts app, the interventions provide data-driven action autosuggestions, a powerful action search function, and a flowchart and test-case debugger. The results provide empirical evidence that these interventions increase the perceived utility of the Shortcuts app and enable more users to achieve their automation goals with the app. Implementing these changes directly in the Shortcuts app could amplify the effect of these interventions and enable more users to take full advantage of the device in their pocket.
The implementation of the interventions was an empirically powerful way to investigate whether such additions to the Shortcuts app would increase its utility, but future work needs to address several shortcomings. Specifically, as noted in the intervention design section, the interventions were not directly applied in the Shortcuts app user interface. Therefore, the effects of directly embedding and replacing features in the stock Shortcuts app need to be investigated. Future work also needs to clarify whether the addition of these interventions adds to the cognitive load users experience, which might deter some novice users from using the Shortcuts app altogether. It is important to note that such a sentiment did not emerge from the intervention user evaluation, but a study among a larger user group should offer more insight. Last but not least, a further step in the exploration of these interventions needs to include their implementation in the Shortcuts app across Apple's platforms.
Notes
I would like to personally thank Prof. Elena Glassman, who provided helpful comments throughout the ideation process of this project.