(Gen)AI at ONE – A Front-end Viewpoint

01-11-2024 • Roy van de Mortel

Looking back on the first day of OutSystems NextStep Experience (ONE) 2024 through a front-end lens, I’m left feeling the hype-tr-AI-n hasn’t passed every station in the software development cycle yet. Can it? Should it? Or do we have a long (rail)road of maturation ahead of us before AI can enter creative design spaces at all? There’s certainly enough potential, but since I haven’t heard a single mention of AI during the entire front-end track, I wonder why that is.

Generate

Let’s start with the first of the four AI pillars that the announced OS Mentor will assist us with: Generate. The most obvious generative AI implementation in the front-end realm has to be content generation. This can be anything from low-fidelity prototypes to highly detailed user interfaces adhering to an uploaded branding document, all ready to be implemented. A cool prospect for sure, but should that be the starting point? This might be an unpopular opinion, but it seems like one of the least imaginative applications of AI to me, since AI cannot invent something genuinely new.

It seems I’m not alone in thinking AI isn’t making me redundant any time soon (Figure 2). It’s much more sensible to give away the boring tasks first. Why should we take the fun out of creation and hand it over to AI?

Figure 2 – Poll at OS One: Should AI replace human insight in design?

I believe the intrinsic motivation that comes from creating is solely responsible for innovation in any field. I come from a background in interaction, level, and game design, but this holds true for software all the same. Inventing creates the very fuel that AI models use to generate ‘new’ designs. Stopping or delaying this will stall innovation, as any design current AI models come up with will be based on existing best practices and on innovative ideas developed by inspired people.

Original thought is essential to solving new problems and moving forward, especially in a fast-moving, dynamic IT landscape. So what would be a more useful generative AI implementation in our realm? Work automation?

“Do it right quickly, or fail fast and do it again.” (Luis Blando, CPTO)

For those who haven’t seen the OutSystems Mentor demo following the keynote yet: a short prompt is given along with a concise requirements document. What ensues seems like magic; an entire data model is created, along with a fully functional multi-screen application, at the press of a button.

This obviously sparks the imagination, but at the same time, I can’t help but wonder whether it will work on more complex real-world examples. How many attempts do I need to write a requirements document before it’s interpreted correctly? Or do I accept any flaws after my first five tries and fix the rest manually? I cannot shake the impression that this technology will simply shift the type of work I do: from creating a few quick master-detail or list screens to adjusting prompts, rewriting requirements, or manually checking descriptions, configurations, and security settings alike. I had similar reservations about a previous endeavor in work automation: fetching data in an aggregate using natural language. It sort of works; it’s awesome. However, it requires quite specific prompts and use cases to add value for everyone. Again: are these the growing pains that come with any disruptive new technology, or signs of something else?

I’m positive Mentor can quickly scaffold 10 entities and 20 screens adequately, but this has never been where the challenges of our work lie. It might free up the time spent on these tasks for more meaningful, higher-value work. But to be honest, I like to be in control of what is created. Losing overview in one large black-boxed transaction doesn’t seem preferable to methodically and manually performing some minor tasks, which lets me keep track of everything through a mental checklist. Admittedly, part of this might just be me in denial, refusing to get with the times. A hard thing to admit as a millennial.

There’s undoubtedly a specific type of project, in combination with a specific type of development team, that could benefit from this, but the devil is in the details, and the 90/10 rule (or the 80/20 Pareto principle) is a cliché in IT for a reason. It seems that only the simple stuff, the stuff I don’t need help with, works perfectly.

Embed & Iterate

Although merging these two pillars might be selling them short due to my limited imagination, I will do it anyway. Why? Because a person writing a blog post can choose to do so, whereas ChatGPT would have blindly and blandly summarized the bullet points.

Embedding refers to the integration of AI agents inside your app. Among other things, this includes our beloved chatbots. I can’t think of many front-end applications for it, though it would be cool to have a bot change the look and feel of any application on the fly.

As for Iterate: there was another poll at OS One, with results similar to those in Figure 2, that asked:

Will AI take over user testing?

Few visitors believed so, and I agree. After all, what is the value of iteration without user input? If you feed an AI the same source, the result will be no different. Don’t get me wrong: I think automated testing can be super valuable, especially in combination with AI as a validation tool. However, for usability testing, we should always ask the actual user, since we’re measuring an (almost unquantifiable) experience.

Validate

Lastly, Validate: in my opinion, the most interesting intersection of AI and front-end. This is where my skepticism shifted towards excitement because of all the potential.

We could use it as a code validation tool for CSS or JS, which is probably already happening somewhere. This is helpful, not least because it trains better programmers, enabling them to push boundaries more quickly.
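Deterministic linters already cover part of this ground. As a minimal sketch of what programmatic CSS validation can look like today, here is stylelint’s Node API with an illustrative two-rule configuration (the rules and the snippet being checked are examples of mine, not a recommended setup); an AI reviewer could build on output like this:

```typescript
// Minimal sketch: validating a CSS snippet with stylelint's Node API.
// Assumes stylelint is installed; the rules below are illustrative.
import stylelint from "stylelint";

async function validateCss(code: string): Promise<void> {
  const { results } = await stylelint.lint({
    code,
    config: {
      rules: {
        "color-no-invalid-hex": true,
        "declaration-block-no-duplicate-properties": true,
      },
    },
  });

  // Print every warning with its position in the snippet.
  for (const result of results) {
    for (const warning of result.warnings) {
      console.log(`${warning.line}:${warning.column} ${warning.text}`);
    }
  }
}

// Example input containing an invalid hex color and a duplicate property.
validateCss(".btn { color: #ff000; color: red; }").catch(console.error);
```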

Or maybe AI can validate difficult-to-spot UI inconsistencies, such as pixel-precision margin offsets or slightly off-brand colors that can hardly be differentiated on cheap monitors. Weird things can happen when CSS or inline styling starts to live a life of its own after a while, and these types of issues can become difficult to manage and resolve, especially if you throw a highly adaptive UI across multiple devices and aspect ratios into the mix. This begs to be validated automatically by something other than human eyes.
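To make that concrete: even a blunt, deterministic check catches some of this, and an AI layer could add perceptual color matching or layout analysis on top. A browser-console sketch of the idea, with a hypothetical brand palette:

```typescript
// Sketch: flag elements whose computed text color falls outside the brand
// palette. The palette values below are hypothetical placeholders.
const BRAND_PALETTE = new Set([
  "rgb(230, 57, 70)",   // hypothetical brand red
  "rgb(29, 53, 87)",    // hypothetical brand navy
  "rgb(255, 255, 255)", // white
  "rgb(0, 0, 0)",       // black
]);

function findOffBrandColors(root: HTMLElement = document.body): void {
  for (const el of root.querySelectorAll<HTMLElement>("*")) {
    // getComputedStyle normalizes colors to rgb()/rgba() notation.
    const color = getComputedStyle(el).color;
    if (!BRAND_PALETTE.has(color)) {
      console.warn("Off-brand color", color, "on", el);
    }
  }
}

findOffBrandColors();
```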

Another talk, on accessibility by Yalda Akhgar, made me realize this is also a very good candidate to throw some computing power at. It’s an often-overlooked field when dealing with the (monetary) constraints of a real-world project, but surely WCAG (the Web Content Accessibility Guidelines), as a clear ruleset, must be a perfect candidate for validating your software through automated testing. Maybe AI could even implement common accessibility patterns at runtime at a user’s request.
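This is less hypothetical than it sounds: engines like axe-core already encode a large subset of WCAG as machine-checkable rules. A minimal sketch of running WCAG 2.0 A/AA checks on a page (assuming axe-core is installed and the code runs in a browser context):

```typescript
// Sketch: automated WCAG 2.0 A/AA checks with axe-core.
import axe from "axe-core";

async function reportAccessibilityViolations(): Promise<void> {
  const results = await axe.run(document, {
    runOnly: { type: "tag", values: ["wcag2a", "wcag2aa"] },
  });

  // Log each violation with the selectors of the offending elements.
  for (const violation of results.violations) {
    console.warn(`${violation.id}: ${violation.help}`);
    for (const node of violation.nodes) {
      console.warn("  ->", node.target.join(" "));
    }
  }
}

reportAccessibilityViolations().catch(console.error);
```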

Perhaps I’m not necessarily a skeptic, but simply disappointed we’re not moving fast enough.

OS One: Don’t Let a Lack of Accessibility Bankrupt Your Project – Yalda Akhgar