C25 Agentic AI Challenge: Judging Criteria

The judging process evaluates each AI solution across several aspects, including speed, technical build, automation, and output quality.

Participants will be assessed across four dimensions:

I. Technical Performance (35%)

  • Speed & Efficiency (10%) – How quickly and smoothly does the agent complete tasks compared to the previous workflow?

  • Resource Optimization (10%) – Was the solution built with minimal lag, clean logic, and efficient use of resources?

  • Scalability & Cost-Effectiveness (15%) – Can the agent reduce manual effort and scale to real-world marketing use cases?


II. Task Execution & Automation (35%)

  • Completeness of Tasks (20%) – How thoroughly does the agent address all aspects of the challenge?

  • Level of Automation (15%) – How much of the process is fully automated? Does the agent run autonomously?


III. Output Quality (20%)

  • Accuracy & Relevance (10%) – Is the AI-generated output factually accurate, relevant, and on-brand?

  • Quality of Output (10%) – Is the content or analysis clear, creative, and professionally structured?


IV. User Experience & Presentation (10%)

  • User Experience (5%) – Is the solution intuitive and easy to use?

  • Presentation (5%) – Was the agent explained clearly? Was the process and logic easy to follow?


Note: Any disputed matters are subject to IAB HK's final decision.



Agentic AI Challenge Overview