Hello,
I've created a web app called Resample as a personal project. It's an in-browser Digital Audio Workstation (DAW), similar to GarageBand, Ableton, and other DAW software. My goal is to create a music authoring tool that lowers the barrier to entry for anyone curious about music production.
Since Resample is a web app, no upfront installation is required. Users can browse an online library of sounds and add sounds to a song grid to create a unique musical composition. During playback, each sound is automatically harmonized and tempo-matched with a target bpm and music key. The target bpm and key are selected based on the first sound added to the song grid.
Here is a high-level list of features currently supported by the app:
- Users can browse a list of songs and view a collection of sounds associated with each song.
- Users can double-select a sound to initiate an audible preview.
- Users can add sounds to a song grid by first selecting a sound, and then selecting a corresponding cell on the song grid.
- Users can then initiate playback of their arrangement by pressing the play button, located in the nav bar.
Song grid cells are organized by row and column. The rows correspond to tracks, and the columns represent different points in time in the arrangement. Currently all sounds have the same fixed duration in beats, so the columns are uniformly distributed, and they are numbered from 1 to 12.
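Since every sound has the same fixed duration and the columns are uniform, each cell's playback start time follows directly from its column and the target bpm. Here is a minimal sketch of that mapping; the constants and names (`BEATS_PER_SOUND`, `startTimeSeconds`, a 4-beat duration) are my own illustrative assumptions, not Resample's actual code:

```typescript
// Illustrative sketch: mapping a song-grid cell to a playback start time.
// Assumes every sound spans a fixed number of beats, per the post; the
// 4-beat value is an assumption.
const BEATS_PER_SOUND = 4; // assumed fixed duration of every sound
const COLUMNS = 12;        // columns are numbered 1 to 12

interface Cell {
  track: number;  // 1-based row index
  column: number; // 1-based column index, 1..COLUMNS
}

// Start time of a cell's sound, in seconds, at a given target bpm.
function startTimeSeconds(cell: Cell, bpm: number): number {
  const startBeat = (cell.column - 1) * BEATS_PER_SOUND;
  return startBeat * (60 / bpm); // one beat lasts 60/bpm seconds
}
```

At 120 bpm, for example, column 1 starts at 0 seconds and column 3 starts at 4 seconds.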
For me, building this app has been a journey of learning a host of different technologies to create a bare minimum prototype of my vision. I've reached a point where the core functionality of the app is beginning to stabilize. I have several features planned, most notably: drag and drop support, manual bpm and key selection, and track volume settings. At the moment, my highest priority is accessibility. I want to make sure that my app's user experience is foundationally sound from an assistive technology perspective.
In summary, I'm posting here in hopes of collecting meaningful feedback on the overall accessibility of my app. While I have taken measures to make my app keyboard accessible, I'm aware there is more to accessibility than being technically correct. Below are a few questions I have about building an accessible music creation tool, in the context of my own app:
- What is the best way to implement keyboard-accessible Drag-and-Drop-like behavior?
The UI for Resample allows users to first select a target sound and then select a vacant song grid cell to place the sound. When successful, the song grid cell then becomes occupied. At this time, sounds in occupied song grid cells cannot be replaced; they can only be removed.
Is this a sufficient, keyboard-accessible way to model editing a song? Can anything be done to make the song creation UX more efficient?
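To make the select-then-place model concrete, here is a rough sketch of it as a pure state transition, under the constraint from above that occupied cells can only be cleared, never replaced. All the names here (`placeSound`, `GridState`, the string ids) are illustrative placeholders, not Resample's real data model:

```typescript
// Sketch of the select-then-place interaction as pure state transitions.
// GridState maps occupied cell ids to sound ids; vacant cells are absent.
type GridState = Map<string, string>;

type PlaceResult =
  | { ok: true; grid: GridState }
  | { ok: false; reason: "no-sound-selected" | "cell-occupied" };

// Attempt to place the currently selected sound into a cell.
function placeSound(
  grid: GridState,
  selectedSound: string | null,
  cellId: string
): PlaceResult {
  if (selectedSound === null) return { ok: false, reason: "no-sound-selected" };
  if (grid.has(cellId)) return { ok: false, reason: "cell-occupied" };
  const next = new Map(grid);
  next.set(cellId, selectedSound);
  return { ok: true, grid: next };
}

// Occupied cells can only be cleared, matching the current behavior.
function removeSound(grid: GridState, cellId: string): GridState {
  const next = new Map(grid);
  next.delete(cellId);
  return next;
}
```

One advantage of modeling it this way is that a "replace" gesture, if added later, would just be a remove followed by a place.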
- Button states changing functionality on click
To initiate a sound preview, you must first select a song, then select a sound by pressing its corresponding button, and finally press the sound's button again to activate a preview. In this scenario, the button's functionality changes as you press it. Is this good, keyboard-accessible UX? Should the button title change on each press, even when the button currently has focus? I'm hoping to make the state change clear to the user via screen reader.
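One way to frame the question is to make the button's two-press behavior explicit state, and derive both the action and the accessible name from that state. The sketch below assumes the first press selects and the second press previews; the labels and names (`press`, `ariaPressed`) are placeholders I made up for illustration:

```typescript
// Sketch: the two-press sound button as explicit state. The accessible
// name (label) and aria-pressed value are derived from the state so a
// screen reader can announce the change; wording is a placeholder.
type SoundButtonState = "idle" | "selected";

interface PressOutcome {
  state: SoundButtonState;
  ariaPressed: boolean; // candidate value for the aria-pressed attribute
  label: string;        // candidate accessible name for the button
  action: "select" | "preview";
}

function press(state: SoundButtonState, soundName: string): PressOutcome {
  if (state === "idle") {
    return {
      state: "selected",
      ariaPressed: true,
      label: `${soundName}, selected. Press again to preview`,
      action: "select",
    };
  }
  // Already selected: a second press triggers the preview.
  return {
    state: "selected",
    ariaPressed: true,
    label: `${soundName}, previewing`,
    action: "preview",
  };
}
```

Whether updating the label while the button holds focus is re-announced reliably varies by screen reader, which is exactly the kind of feedback I'm hoping testers can give.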
- Would skip links be helpful navigating the different sections of the app?
Currently the app has 3 primary regions: the Control Bar, the Sound Explorer, and the Song Grid. One thing I noticed while testing keyboard accessibility of the app is that when there are a lot of sounds, it can take several tab presses to get to the Song Grid region after selecting a sound. Also, navigating the grid itself can take time since there are 48 cells in total.
I'm wondering if skip links would help here; for example, once a track has been focused, offering a "Skip to next track" link may be useful.
Alternatively, I've been thinking that hierarchical navigation could work well. For example, only high-level regions would be focusable initially, and the user could drill into each region to view its contents. They would then have the opportunity to break out of the region after taking a certain action, e.g. after selecting a sound, an adjacent skip link may be provided to "Skip to song grid."
I'm most interested in formalizing a solution for region-based navigation like this since it will serve as a foundation for many other features I'd like to build that involve back and forth interaction between the Sound Explorer and the Song Grid.
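As a starting point for discussion, the region-based navigation I have in mind could be reduced to pure focus-target logic like the sketch below. The three region names match the app; the functions and the "skip after action" pairings are hypothetical:

```typescript
// Sketch: hierarchical region navigation as pure focus-target logic.
// The three regions match the app; the rest is a hypothetical design.
const REGIONS = ["control-bar", "sound-explorer", "song-grid"] as const;
type Region = (typeof REGIONS)[number];

// At the top level, Tab cycles between regions instead of every control.
function nextRegion(current: Region): Region {
  const i = REGIONS.indexOf(current);
  return REGIONS[(i + 1) % REGIONS.length];
}

// Contextual skip links: after selecting a sound, jump to the grid;
// after placing one, jump back to the explorer to pick the next sound.
function skipTarget(afterAction: "sound-selected" | "sound-placed"): Region {
  return afterAction === "sound-selected" ? "song-grid" : "sound-explorer";
}
```

The appeal of keeping this as a small, testable function is that any future feature involving back-and-forth between the Sound Explorer and the Song Grid can reuse the same focus-routing rules.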
I appreciate any and all help! I do have a ton more questions in general, but I'm hoping this is a good start. Even an honest critique of the overall application would be greatly appreciated.
Thank you
Comments
Amazing! Though I wouldn't use the full capabilities, I've been waiting for something like this on the web for quite some time now. I'll be testing its accessibility for you with VoiceOver on Mac and ChromeVox on ChromeOS.
Also I think it'd be better if you move this thread into app dev and prog.
Thanks!
Awesome!
Thanks! I'll definitely move this over to the other thread. I wasn't sure at first which thread would be best; app dev and prog makes the most sense.