Congratulations to all the winners of Phase I!
Below is the first list of shortlisted entries, based on the scores from your codebase, deck and video.
List I
Track: General

| Team | Rank |
| --- | --- |
| torchFlow | 1 |
| Sagar | 2 |
| 691825-U8T8P3A8 | 3 |
| TeamAlphaRiver | 4 |
| Astrics | 5 |
| Atlantic Warriors | 6 |
| Binary Bugs | 7 |
| Syntax Slayers | 8 |
| Nature Saviour | 9 |
| International B | 10 |
| The Civil-AI | 11 |
| Lucis_Tech | 12 |
| Techiee Hackers | 13 |
| Gennex | 14 |
| future-coders | 15 |
| Remote Coders | 16 |
| Aquamarine | 17 |
| Vision Giants | 18 |
| Cyberon | 19 |
| Bluescan | 20 |
Track: Kyndryl and REVA University

| Team | Rank |
| --- | --- |
| Kiernans | 1 |
| Team_Solo_GSM | 2 |
| Code Craft RACErs | 3 |
| AquaGuardians | 4 |
| Code Warriors | 5 |
| Entropy4Change | 6 |
| Weather-eye | 7 |
| Nuoc Tot | 8 |
| Paladin | 9 |
| Hope | 10 |
List II
The teams in List II did not submit all the artefacts required by the hackathon guidelines. However, we found their solutions technically sound, and they have therefore been shortlisted to move to Phase II. They must submit the improved codebase, deck, MVP, etc., as per the Phase II guidelines, or their submissions will be disqualified.
Track: General

| Team | Rank |
| --- | --- |
| Infinity | 21 |
| DS and Chill | 22 |
| AI-Avengers | 23 |
| Eclipse | 24 |
| Avani | 25 |
Track: Kyndryl and REVA University

| Team | Rank |
| --- | --- |
| AI Mafia | 11 |
Phase I – Evaluation Criteria
Submission of All Documents in the Proper Format
We are pleased to inform you that, as the first evaluation criterion, submissions that followed the required format were given first preference. Your attention to detail and compliance with the formatting guidelines is highly appreciated.
Evaluation of result.csv File on Predict Dataset
Additionally, we have completed the evaluation of the result.csv file provided by each participant on the predict dataset. Our evaluators meticulously examined the contents of each result.csv file to assess the results.
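For reference, below is a minimal sketch of how a result.csv could be scored against held-out labels. The file names, column names and the accuracy metric are illustrative assumptions, not the official evaluation script.

```python
# Minimal sketch: scoring a result.csv against held-out labels.
# File names, column names and the accuracy metric are assumptions,
# not the organisers' actual evaluation harness.
import pandas as pd
from sklearn.metrics import accuracy_score

predictions = pd.read_csv("result.csv")          # participant predictions
ground_truth = pd.read_csv("ground_truth.csv")   # held-out labels for the predict dataset

# Join on a shared identifier so row order does not matter.
merged = predictions.merge(ground_truth, on="id", suffixes=("_pred", "_true"))

score = accuracy_score(merged["label_true"], merged["label_pred"])
print(f"Accuracy on the predict dataset: {score:.4f}")
```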
ONNX Model Evaluation and Golden Dataset Results
For participants who submitted ONNX files, we have also evaluated your models. We ran your ONNX models on the golden dataset (an unseen dataset) to assess their performance and accuracy. Our team of experts used a comprehensive set of metrics to gauge the effectiveness of your solutions.
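As an illustration, the sketch below shows how an ONNX model might be run against such a dataset with onnxruntime. The model path, input arrays and single accuracy metric are assumptions; the evaluators' actual harness and metric set may differ.

```python
# Minimal sketch: running a submitted ONNX model on an unseen dataset.
# The file names, input shape and metric are illustrative assumptions.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("submission.onnx")
input_name = session.get_inputs()[0].name

features = np.load("golden_features.npy").astype(np.float32)  # unseen inputs
labels = np.load("golden_labels.npy")                         # unseen labels

# Run inference and take the arg-max over class scores.
logits = session.run(None, {input_name: features})[0]
predicted = np.argmax(logits, axis=1)

accuracy = float((predicted == labels).mean())
print(f"Accuracy on the golden dataset: {accuracy:.4f}")
```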
Submissions not in the prescribed format
We have also reviewed the submissions that were not in the prescribed formats and attempted to run your code. However, since the dependencies were not clearly specified, we succeeded only in select cases. Those submissions were also considered and ranked; the remaining submissions were not considered for further evaluation.
Still have queries?
Contact us on Discord, our dedicated channel for communicating with other participants, mentors and AI experts.
You can also reach out to us at [email protected] or +91 89040 58866.