Team: RAYTECH (11th)
The Neural MMO Challenge is an opportunity for researchers and machine learning enthusiasts to test their skills by designing and building agents that can survive and thrive together in a massively multi-agent environment full of potential adversaries.
In this challenge, you will train your models locally and then upload them to AIcrowd (via git) to be evaluated.
The following is a high-level description of how this process works.
Clone this starter kit repository.
```shell
git clone http://gitlab.aicrowd.com/neural-mmo/neurips2022-nmmo-starter-kit.git
conda create -n neurips2022-nmmo python==3.9
conda activate neurips2022-nmmo
cd ./neurips2022-nmmo-starter-kit
```
Install the necessary dependencies: `git-lfs` is required for submitting large files via git (macOS users can install it with `brew`), and `neurips2022nmmo` is the Neural MMO environment wrapper prepared for the NeurIPS 2022 competition.
```shell
apt install git-lfs
pip install git+http://gitlab.aicrowd.com/neural-mmo/neurips2022-nmmo.git
pip install -r requirements_tool.txt
```
This competition is based on Neural MMO v1.6.
For more information about Neural MMO v1.6, please refer to https://github.com/NeuralMMO/environment/tree/v1.6.
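After installing, you can quickly verify that the competition packages are importable. This check uses only the standard library; the package names match what the commands above install:

```python
from importlib import metadata

# Print the installed version of each competition package, if present.
for pkg in ("neurips2022nmmo", "nmmo"):
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "is not installed")
```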
```shell
python tool.py submit "my-first-submission"
```
See your submission on the Submissions page and check your rank on the Leaderboard in a few minutes.
You must put all files and models under the `my-submission/` directory; otherwise the evaluation will fail.
```
- my-submission/       # Directory containing your submission.
  | - other_files      # All other files needed by your submission.
  | - submission.py    # Entrypoint of your submission.
- submission-runtime   # Directory containing the default Dockerfile and requirements.txt.
- Dockerfile           # Dockerfile for your submission.
- requirements.txt     # Python requirements for your submission.
- .submit.sh           # Helper script for submitting.
- tool.py              # Helper script to validate locally and submit.
- aicrowd.json         # JSON metadata for submission.
```
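Before submitting, a quick sanity check that the required layout is in place can save a failed evaluation. The helper below is an illustration using only the standard library, not part of the starter kit:

```python
from pathlib import Path

def check_layout(root: str = ".") -> list:
    """Return a list of problems with the submission layout (illustrative helper)."""
    base = Path(root)
    problems = []
    if not (base / "my-submission").is_dir():
        problems.append("missing my-submission/ directory")
    elif not (base / "my-submission" / "submission.py").is_file():
        problems.append("missing my-submission/submission.py entrypoint")
    if not (base / "aicrowd.json").is_file():
        problems.append("missing aicrowd.json")
    return problems

print(check_layout("."))
```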
The default runtime is provided in `submission-runtime/`. We also accept submissions with a custom runtime; the configuration files include `requirements.txt` and `Dockerfile`.
submission.py
Here is an example of a submission:
```python
from nmmo.io import action
from neurips2022nmmo import Team


class YourTeam(Team):
    def __init__(self, team_id: str, config=None, **kwargs):
        super().__init__(team_id, config)

    def act(self, observations):
        actions = {}
        if "stat" in observations:
            observations.pop("stat")
        for player_idx, obs in observations.items():
            actions[player_idx] = {}
        return actions


class Submission:
    team_klass = YourTeam
    init_params = {}
```
Note that `YourTeam` must inherit from the `Team` class and implement the `act` method, which we will call during evaluation. An example with random agents is provided in `my-submission/submission.py` for your reference.
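To illustrate the `act` contract without installing the competition packages, here is a minimal stdlib-only sketch. The `Team` base class and the observation layout below are stand-ins for `neurips2022nmmo.Team` and the real observations, not the actual API:

```python
# Stdlib-only sketch of the act() contract. "Team" here is a stand-in
# for neurips2022nmmo.Team, and the observation layout is hypothetical.
class Team:
    def __init__(self, team_id: str, config=None, **kwargs):
        self.team_id = team_id
        self.config = config

class IdleTeam(Team):
    def act(self, observations):
        # "stat" carries episode statistics, not a player, so drop it first.
        observations.pop("stat", None)
        # Return one (possibly empty) action dict per player index.
        return {player_idx: {} for player_idx in observations}

team = IdleTeam("demo")
actions = team.act({0: {"hp": 10}, 1: {"hp": 7}, "stat": {}})
print(actions)  # {0: {}, 1: {}}
```

An empty dict per player means "do nothing"; a trained policy would fill in the per-player action entries instead.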
We provide `tool.py` to validate and submit your submission.

```shell
python tool.py test
python tool.py submit <unique-submission-name>
```
If you can see the following output, congratulations! Now you can check your submission.
```
              #///(      )///#
            ////  ///  ///  ////
            /////  //////////  ////
           /////////////////////////
       ///  ///////////////////////  ///
      ///////////////////////////////////
     /////////////////////////////////////
      )////////////////////////////////(
        /////                     /////
  (///////  ///              ///  //////)
 ///////////   ///////  //////   //////////
  (///////////////////////////////////////)
       /////                       /////
          /////////////////
              ///////////
```
However, even after seeing the above output, you may still get a "failed" status on the online submission page. Most likely, your local running environment differs from that of our competition server.
To test your repo in the exact server environment, add the `--startby=docker` option when testing.

```shell
python tool.py test --startby=docker
python tool.py submit <unique-submission-name> --startby=docker
```
We provide a variety of baseline agents; please refer to the neurips2022-nmmo-baselines repository.
For local evaluation, we provide `RollOut` for easier debugging; here is an example. The environment is the same as the online evaluation environment in PvE Stage 1.
```python
from neurips2022nmmo import CompetitionConfig, scripted, submission, RollOut

config = CompetitionConfig()

my_team = submission.get_team_from_submission(
    submission_path="my-submission/",
    team_id="MyTeam",
    env_config=config,
)
# Or initialize the team directly
# my_team = MyTeam("MyTeam", config, ...)

teams = [scripted.CombatTeam(f"Combat-{i}", config) for i in range(5)]
teams.extend([scripted.MixtureTeam(f"Mixture-{i}", config) for i in range(10)])
teams.append(my_team)

ro = RollOut(config, teams, parallel=True, show_progress=True)
ro.run(n_episode=1)
```
During evaluation, your submission will be allocated 1 CPU core and 1 GB of memory. At each step, your agent team must return its decisions within 600 ms.
Your submission should not be larger than 500 MB.
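Given the 600 ms budget, it can help to time your `act` method locally before submitting. Below is a minimal sketch using only the standard library; the budget constant and the `timed_act` wrapper are illustrative, not part of the competition API:

```python
import time

STEP_BUDGET_MS = 600  # per-step decision budget from the competition rules

def timed_act(act_fn, observations):
    """Call act_fn and warn if it exceeds the step budget (illustrative helper)."""
    start = time.perf_counter()
    actions = act_fn(observations)
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > STEP_BUDGET_MS:
        print(f"WARNING: act took {elapsed_ms:.1f} ms (> {STEP_BUDGET_MS} ms)")
    return actions

# Example with a trivial act function:
actions = timed_act(lambda obs: {i: {} for i in obs}, {0: {}, 1: {}})
print(actions)  # {0: {}, 1: {}}
```

Remember that the server allocates only 1 CPU core, so local timings on a faster machine may understate the real per-step cost.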
For participants in China, you can pull the image from Tencent Cloud:

```shell
python tool.py submit <unique-submission-name> --startby=docker --registry=tencentcloud
```
For participants using Windows, we strongly recommend installing WSL.
Please refer to the Unity viewer tutorial for instructions on how to render locally.
Error: Pack Exceeds Maximum Allowed Size
Try dividing your whole commit into multiple smaller commits.