
Code Blue

Project Info

Team Name


Blue Horizon


Team Members


samson

Project Description


Imagine a web app where users can effortlessly upload style guides and documents for review, powered by Meta's LLAMA 3 8B model. Behind the scenes, three specialized AI agents collaborate in an autonomous, agentic flow to rigorously evaluate the document against the uploaded style guides. These agents break down the document, pinpointing discrepancies, improvements, and suggestions, all mapped to specific page numbers and precise locations. The app then generates an in-depth, actionable review, which can be easily exported as a text file. The result is a seamless, efficient process for refining documents with precision, saving you time while elevating your writing.

https://www.youtube.com/watch?v=P-CkbFv-CVM


#data_sets/1533

Data Story


I’ve made every effort to address the requests from The Treasury and The Public Services Commission. Based on my understanding, their main goal is to use AI agents to help create, edit, and review documents following Plain Language guidelines outlined in their style manual.

The challenge with government documents is that they’re subject to strict data privacy laws. Using cloud solutions would almost certainly cause issues, since the data is typically processed overseas, making it unlikely to pass government scrutiny. There is also the cybersecurity challenge of protecting a large language model from prompt-hacking. To address these issues, I needed a solution that could run locally, on the government’s internal network, using affordable technology. It also had to be resistant to prompt-hackers and flexible enough to work with multiple style guides, enhancing the document review process with AI agents.

This is where Code Blue, the document review multi-tool, comes into play.

Code Blue is a Python application that accepts style guides as user input. Once the style guides are uploaded, LM Studio processes them: the text is split into chunks, embeddings are created, and a unified list of rules is compiled from all the style guides. The LLAMA 3 language model then uses these rules to review a document uploaded by the user. The output is a text file of recommendations, noting specific paragraphs, page numbers, and suggested improvements.
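In rough terms, the ingestion step works like the sketch below. This is a simplified sketch rather than the actual Code Blue code: it assumes LM Studio is exposing its default OpenAI-compatible local endpoint, and the model id, chunk size, and prompts are placeholders.

```python
# Simplified sketch of the style-guide ingestion step, assuming LM Studio is
# serving a Llama 3 model on its default OpenAI-compatible endpoint
# (localhost:1234). The model id, chunk size and prompts are illustrative
# placeholders, not the exact values used in Code Blue.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

def chunk_text(text: str, size: int = 1500) -> list[str]:
    """Split a style guide into fixed-size chunks for processing."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def extract_rules(style_guides: list[str]) -> str:
    """Distil each chunk into plain-language rules, then merge everything
    into one unified ruleset."""
    rules = []
    for guide in style_guides:
        for chunk in chunk_text(guide):
            response = client.chat.completions.create(
                model="meta-llama-3-8b-instruct",  # placeholder model id
                messages=[
                    {"role": "system",
                     "content": "Extract concise writing rules from this style guide excerpt."},
                    {"role": "user", "content": chunk},
                ],
            )
            rules.append(response.choices[0].message.content)
    return "\n".join(rules)
```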

Project Guidelines and How I Addressed These:

Competition Request:
"We are looking for creative applications of AI to enhance clear, compelling communication for different audiences."

My Response:
To address this, I included a prompt specifically recommending the use of Australian English for an Australian audience. I didn't embed this prompt directly in the system prompt, as doing so could compromise the quality of the content. Instead, users can provide it as input when needed.
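As a rough illustration of this design choice (the wording of the instruction and the helper below are hypothetical, not the actual implementation), the audience hint is supplied by the user at review time and combined with the request, rather than being baked into the system prompt:

```python
# Hypothetical illustration of keeping the audience instruction out of the
# baked-in system prompt: the user supplies it per review, and it is simply
# prepended to the document being reviewed.
AUDIENCE_HINT = "Use Australian English spelling and conventions for an Australian audience."

def build_messages(ruleset: str, document: str, user_instruction: str | None = None) -> list[dict]:
    """Combine the unified ruleset, the document and an optional
    user-supplied instruction into a chat request."""
    user_content = f"{user_instruction}\n\n{document}" if user_instruction else document
    return [
        {"role": "system", "content": f"Review the document against these rules:\n{ruleset}"},
        {"role": "user", "content": user_content},
    ]

# Example: messages = build_messages(ruleset, document, AUDIENCE_HINT)
```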

Competition Request:
"Ensure accuracy of content."

My Response:
I've taken multiple steps to ensure accuracy. The review process uses three LLAMA agents that independently assess the document. While accuracy can be enhanced by better models, the solution is model-agnostic, meaning we can easily switch to an improved model if needed.

Competition Request:
"Accessibility for inclusion."

My Response:
The Plain Language style guide discourages PDF output due to accessibility concerns. To address this, I chose to generate text files, which are accessible to a broader audience. I also incorporated multiple plain language style guides to ensure the tool adheres to the widest possible set of plain language rules.
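As a simplified illustration of the export step (the field names below are placeholders, not the actual Code Blue data model), the review is written out as a plain text file rather than a PDF:

```python
# Simplified sketch of the text export; the finding fields (page, paragraph,
# issue, suggestion) are illustrative placeholders.
def export_review(findings: list[dict], path: str = "review.txt") -> None:
    """Write the review as plain text so it stays readable by screen
    readers and basic text editors."""
    with open(path, "w", encoding="utf-8") as out:
        for finding in findings:
            out.write(f"Page {finding['page']}, paragraph {finding['paragraph']}:\n")
            out.write(f"  Issue: {finding['issue']}\n")
            out.write(f"  Suggestion: {finding['suggestion']}\n\n")
```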

Competition Request:
"Support an agentic flow."

My Response:
This is a multi-agent system. One LLAMA agent processes the style guides and creates a ruleset, while three additional LLAMA agents apply these rules to the document to provide feedback.
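In simplified form, the reviewing side of that flow looks like the sketch below (the agent briefs and model id are placeholders, not the exact Code Blue implementation). Keeping the model id in a single constant is also what makes the solution model-agnostic: swapping in a better model is a one-line change.

```python
# Simplified sketch of the three reviewer agents. The focus areas match the
# description given later (style, word choice, structure); the briefs and
# model id are illustrative placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
MODEL = "meta-llama-3-8b-instruct"  # change this one constant to swap models

REVIEWER_FOCUS = {
    "style": "Check tone, voice and plain-language style.",
    "word choice": "Flag jargon, ambiguity and non-inclusive wording.",
    "structure": "Check headings, sentence length and overall document flow.",
}

def run_reviewers(ruleset: str, document: str) -> dict[str, str]:
    """Run three independent reviewer agents over the same document and
    collect their feedback, keyed by focus area."""
    feedback = {}
    for focus, brief in REVIEWER_FOCUS.items():
        response = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system",
                 "content": (f"You review documents for {focus}. {brief} "
                             f"Apply these rules and cite page and paragraph numbers:\n{ruleset}")},
                {"role": "user", "content": document},
            ],
        )
        feedback[focus] = response.choices[0].message.content
    return feedback
```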

Project Ideal Solutions and How I Addressed These:

Competition Request:
"Ideally, the solution should integrate with existing software like word processors, slides, and spreadsheets."

My Response:
The tool outputs in text format, allowing users to easily incorporate it into their workflows with word processors and other software.

Competition Request:
"Help users rewrite content in plain language."

My Response:
I thoroughly studied the Plain Language guidelines, their history, and key principles. This web app effectively helps users rewrite content to meet plain language standards.

Competition Request:
"Suggest inclusive language options."

My Response:
The LLAMA agents focus on different aspects of the document (style, word choice, and structure), which together address inclusivity and language choice.

Competition Request:
"Correct style, grammar, and punctuation errors."

My Response:
This is covered by the prompt users provide at review time. Embedding it within the system prompt could cause errors, so I left it as a separate user instruction.

Competition Request:
"Provide feedback to users so they can see document improvements and learn for next time."

My Response:
The web app provides highly detailed feedback, offering users insight into how their document was improved.



Team DataSets

Plain Language

Description of Use: Style guide for writing clearly.


Challenge Entries

Use AI to transform bureaucratic jargon into plain English

How can we use AI to create clear, accurate and user-friendly government content? Specifically, how can we use AI tools to apply Australian Government Style Manual (Style Manual) rules and guidelines to create, edit and review content? Content that is clear, accurate and understandable helps people make informed decisions and comply with their obligations.
