"Authentication credentials were not provided" #4605

Open
tutunarsl opened this issue Apr 19, 2025 · 17 comments

@tutunarsl

tutunarsl commented Apr 19, 2025

To begin with: I have previously created a challenge successfully, so I can confirm that I was able to follow the instructions.

However, I am currently encountering an issue as a host when creating/updating a challenge through GitHub.

Basically, the GitHub runner fails in the "Create or update challenge" step. The same setup, with the same GitHub secrets, challenge authentication token, team number, etc., was working a week ago.

Here is the part of the runner logs that is possibly relevant to the issue:

Run python3 github/challenge_processing_script.py

Following errors occurred while validating the challenge config: 403 Client Error: Forbidden for url: https://staging.eval.ai/api/challenges/challenge/challenge_host_team/xxx/create_or_update_github_challenge/
There was an error while creating an issue: 404 {"message": "Not Found", "documentation_url": "https://docs.github.com/rest/repos/repos#get-a-repository", "status": "404"}

Exiting the challenge_processing_script.py script after failure

Error: Process completed with exit code 1.

Opening the failing URL directly returns this error:

{"detail":"Authentication credentials were not provided."}

I would appreciate any advice you can provide to identify the issue.
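
To help narrow this down, here is a minimal sketch for exercising the failing endpoint outside the CI runner. It assumes the API accepts a DRF-style "Authorization: Token <token>" header (which appears to be what the starter script sends) and that the token is exported as EVALAI_AUTH_TOKEN; the team ID is a placeholder:

```python
# Minimal sketch: call the failing endpoint directly to see how the server
# reacts to the token, independent of the GitHub Actions runner.
# Assumptions: DRF-style "Authorization: Token ..." header; EVALAI_AUTH_TOKEN
# exported in the shell; TEAM_ID is a placeholder for the host team ID.
import os
import requests

HOST = "https://staging.eval.ai"
TEAM_ID = 0  # placeholder: your challenge host team ID
url = f"{HOST}/api/challenges/challenge/challenge_host_team/{TEAM_ID}/create_or_update_github_challenge/"

headers = {"Authorization": f"Token {os.environ['EVALAI_AUTH_TOKEN']}"}
# The request body is omitted on purpose; the point is only to observe
# the authentication response.
resp = requests.post(url, headers=headers, timeout=30)
print(resp.status_code, resp.text)
```

If this still returns 403 with the header set, the token/team pairing is being rejected server-side rather than a secret going missing in the workflow.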

@RishabhJain2018
Member

RishabhJain2018 commented Apr 19, 2025

Hey @tutunarsl, can you please check whether your token has expired? (You will see the expiry date on the page where you copy the token.)

@tutunarsl
Author

Hello @RishabhJain2018, thank you for looking into the issue.

My staging.eval.ai token is confirmed to be up to date. I retrieved it from https://staging.eval.ai/web/profile/auth-token and it is valid until 2026.

I also verified that my GitHub token is still valid; it has no expiry date.

@RishabhJain2018
Member

RishabhJain2018 commented Apr 19, 2025

Interesting. Looking at the error, can you please double-check that the host team ID is also from the staging server?

@tutunarsl
Author

I have been testing and debugging for a few hours and I am confident the host team ID matches the staging team ID. I also deleted the team and created a new one, replacing the (now incremented) ID in the config.

Moreover, when I deliberately enter the production server team ID or token, the runner fails at a different step, as expected.

My next step is to try to replicate the issue with a freshly forked repository and a new user on the staging server.

@RishabhJain2018
Member

My next step is to try to replicate the issue with a freshly forked repository and a new user on the staging server.

Can you please try this and let me know?

@tutunarsl
Author

Hello again @RishabhJain2018, I was able to construct a series of events that reproduces the issue. Below is the history of the runner statuses.

[Screenshot: GitHub Actions run history for the commits described below]

What has happened?

  1. Forked the repository and set AUTH_TOKEN.
  2. Activated the runners and created a "challenge" branch containing the eval_ai_token, the team ID, and the server, all retrieved from the staging server.
  3. The user pushes the changes; the runner starts and succeeds, and the challenge is successfully deployed on the staging server, waiting to be submitted for approval. (commit a3b3240)
  4. The user decides to update the challenge name, commits the change to challenge_config.yaml, and pushes. The runner succeeds and the user can see the updated challenge on the staging server. (commit cbd5f8a)
  5. The user decides to change more substantial settings, such as adding a new leaderboard or a new dataset-phase split. Since this is not documented, the user does not know it will fail and pushes the changes. The runner fails with feedback that this is not possible (commit 9440476):
 Following errors occurred while validating the challenge config:
ERROR: The leaderboard with ID: 2 doesn't exist. Addition of a new leaderboard after challenge creation is not allowed.
ERROR: Challenge phase split (leaderboard_id: 2, challenge_phase_id: 2, dataset_split_id: 2) doesn't exist. Addition of challenge phase split after challenge creation is not allowed.
There was an error while creating an issue: 404 {"message": "Not Found", "documentation_url": "https://docs.github.com/rest/repos/repos#get-a-repository", "status": "404"}

Exiting the challenge_processing_script.py script after failure
  6. The user reverts the changes and makes a minor change to the challenge name. The runner succeeds and the changes are shown on the staging server. (commit 3d9db8a)
  7. The user wants to update important settings such as the leaderboard and phase-dataset splits. Since the error indicated these cannot be changed after challenge creation, the user deletes the challenge through the WebUI (the small red trash icon next to the challenge name). The challenge is no longer visible at https://staging.eval.ai/web/hosted-challenges. The user changes the challenge title before applying any leaderboard or dataset-split changes and pushes. The runner succeeds; however, the challenge does not appear on the staging server under hosted-challenges. (commit 4094896)
  8. The user is confused and thinks that maybe the challenge host team also needs to be deleted. The user deletes the challenge host team, re-creates one (the team ID changes), updates the configuration with the new team ID, and triggers the runner (commit bc9daa5). The runner fails with the exact same error reported in this issue:

Run python3 github/challenge_processing_script.py

Following errors occurred while validating the challenge config: 403 Client Error: Forbidden for url: https://staging.eval.ai/api/challenges/challenge/challenge_host_team/357/create_or_update_github_challenge/
There was an error while creating an issue: 404 {"message": "Not Found", "documentation_url": "https://docs.github.com/rest/repos/repos#get-a-repository", "status": "404"}

Exiting the challenge_processing_script.py script after failure

Error: Process completed with exit code 1.

From this point on, the user cannot create a challenge at all, regardless of auth token, name, or team ID; the runner keeps failing, apparently because it can no longer resolve the user's eval_ai_auth_token (my assumption).
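
The second line of that log (the 404 when the script tries to open an issue) can also be checked independently of EvalAI. A small sketch, assuming the GitHub token is exported as GITHUB_AUTH_TOKEN and using a placeholder owner/repo slug for the fork:

```python
# Sketch: check whether the GitHub token can see the repository at all.
# GitHub deliberately answers 404 (not 403) for repositories a token
# cannot access, which matches the "Not Found" error in the runner log.
import os
import requests

REPO = "owner/repo"  # placeholder: slug of the forked starter repository
resp = requests.get(
    f"https://api.github.com/repos/{REPO}",
    headers={"Authorization": f"token {os.environ['GITHUB_AUTH_TOKEN']}"},
    timeout=30,
)
print(resp.status_code)  # 200: token sees the repo; 404: wrong slug or no access
```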

Summary

  • Possibly, deleting a challenge through the staging server removes some relation between the user and the server, preventing future challenge creation and updates.
  • There is virtually no documentation on how to apply more substantial updates to an existing challenge. It is quite a big assumption that hosts will know exactly which splits, leaderboards, etc. they need up front. Isn't that the purpose of the staging server, where challenge development can happen?

Currently, there is no way to add a new leaderboard or dataset-phase split to an existing challenge without creating a new user. I really hope I am wrong here and you can provide instructions for doing this.

@tutunarsl
Author

Further investigation suggests that creating a new user on staging.eval.ai does not make a failing repository functional; instead, the GitHub repository itself must be changed.

@RishabhJain2018
Member

Hi @tutunarsl, Thank you so much for such a detailed investigation. We'll fix this bug. I hope you are able to create the challenge now.

@tutunarsl
Author

Hi @RishabhJain2018, yes, I was able to create it; however, I am currently facing struggles similar to those other users have reported. I will follow up on this in another issue.

@RishabhJain2018
Member

Oh... what is the issue?

@tutunarsl
Author

I am trying to isolate and understand different parts of the challenge hosting and submission procedure. To achieve this:

I have followed the instructions and created the default "Random Number Generator Challenge" to test the submission procedure and worker / submission logs.

After submitting an arbitrary file (as this example randomly fills the output["results"] entry), I am unable to get any feedback from the WebUI. The file is presumably submitted successfully, judging by the green "Submitted" status.

Similar problems are mentioned in Cloud-CV/EvalAI-Starters#94 and Cloud-CV/EvalAI-Starters#102.

However, despite waiting for several hours, I am not getting any feedback from the UI.
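
As a stopgap while the WebUI shows nothing, the submission status can be polled through the API. This is a sketch under the assumption that the jobs endpoint follows the pattern /api/jobs/challenge/<id>/challenge_phase/<id>/submission/ and that the list is returned newest-first; the IDs and status values are assumptions as well:

```python
# Sketch: poll submission status via the API instead of waiting on the WebUI.
# Assumptions: the endpoint path below (based on EvalAI's jobs API), a
# newest-first paginated list, and the status strings; CHALLENGE_ID and
# PHASE_ID are placeholders.
import os
import time
import requests

HOST = "https://staging.eval.ai"
CHALLENGE_ID, PHASE_ID = 0, 0  # placeholders
url = f"{HOST}/api/jobs/challenge/{CHALLENGE_ID}/challenge_phase/{PHASE_ID}/submission/"
headers = {"Authorization": f"Token {os.environ['EVALAI_AUTH_TOKEN']}"}

while True:
    latest = requests.get(url, headers=headers, timeout=30).json()["results"][0]
    print(latest["status"])
    if latest["status"] not in ("submitted", "running"):
        break  # e.g. "finished" or "failed"
    time.sleep(60)
```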

Some feedback the platform could provide:

  • Validation that the worker is being initialized, so the user knows the holdup is on the server side rather than their own
  • An estimated time of completion
  • An early check on submission file validity

My biggest concern is that this seems to be an issue even though I am using the simplest possible example.

Can you confirm that the staging server has worker capacity similar to the production server?

@RishabhJain2018
Member

Can you confirm that the staging server has worker capacity similar to the production server?

No, the staging server is very small. Can you please try the same thing on the production server?

@tutunarsl
Author

tutunarsl commented Apr 21, 2025

Can you confirm that the staging server has worker capacity similar to the production server?

No, the staging server is very small. Can you please try the same thing on the production server?

The production server runs. It seems to respond with some odd delay and has its own issues, but it runs.

Since this information about the staging server was not available to the user (nor its consequence: effectively no worker allocation at all), I spent quite a bit of time on this.

This seems critical: currently the user's development time is not respected, and there is little incentive to stay on the platform.

@RishabhJain2018
Member

Hi @tutunarsl ,

Since this information about the staging server was not available to the user (nor its consequence: effectively no worker allocation at all), I spent quite a bit of time on this.
This seems critical: currently the user's development time is not respected, and there is little incentive to stay on the platform.

Thank you for your feedback and we will try to make this as smooth as possible for the user.

@RishabhJain2018
Member

Hi @tutunarsl, were you able to create the challenge on EvalAI, or are there any issues I can help with for your challenge?

@tutunarsl
Author

After tackling the aforementioned issues and some unmentioned ones, yes, I was able to create a challenge. Many of the problems I encountered have already been reported and are idling as PRs or swept under the rug as old issues.

Some of them are:

  • Outdated worker Python version
  • Missing local development guide
  • No warning to the user about the consecutive submission limit
  • Inability to change the challenge configuration after creation
  • Limited support for challenge phases with multiple splits under one banner
  • Outdated documentation (challenge creation through a zip file)
  • No challenge search functionality on the production server

EvalAI has big potential; however, some very basic things are questionably hard. There were many moments when I asked myself, "this is way harder than it is supposed to be, I should use another tool." Judging by the open issues and PRs, the community already contributes to a degree, yet maintenance seems slow.

@RishabhJain2018
Member

Hi @tutunarsl, Thank you for the feedback. We will definitely work towards making the challenge hosting experience as smooth as possible.
