"Authentication credentials were not provided" #4605
Hey @tutunarsl, can you please check whether your token has expired? (You will see the expiry date where you copied the token from.)
Hello @RishabhJain2018, thank you for attending to the issue. My staging.eval.ai token is confirmed to be up-to-date; I retrieved it again from staging.eval.ai. I also verified that my GitHub token is still valid and has no expiry date.
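For context on the error itself: "Authentication credentials were not provided" is Django REST Framework's stock 401 message, returned when the `Authorization` header is missing or malformed, regardless of whether the token value is valid. A minimal sketch for classifying that case locally (the `Token`/`Bearer` schemes are DRF conventions; which scheme EvalAI's staging server expects here is an assumption):

```python
# Sketch: classify why DRF might answer "Authentication credentials were not
# provided". Assumes DRF-style schemes ("Token <key>" or "Bearer <jwt>");
# the exact scheme EvalAI expects is an assumption, not confirmed here.

def classify_auth_header(headers: dict) -> str:
    """Return a likely cause for DRF's 'credentials were not provided' 401."""
    auth = headers.get("Authorization", "")
    if not auth:
        return "missing Authorization header"
    parts = auth.split()
    if len(parts) != 2:
        return "malformed Authorization header (expected '<scheme> <credentials>')"
    scheme, _ = parts
    if scheme not in ("Token", "Bearer"):
        return f"unexpected auth scheme '{scheme}'"
    return "header looks well-formed; check token validity/expiry server-side"
```

If the header is well-formed and the token is fresh, the failure is more likely in how the runner assembles the request than in the token itself.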
Interesting. Looking at the error, can you please double-check that the host team ID is from the staging server as well?
I have been testing and debugging for some hours, and I am confident the host team ID matches the staging team ID. I also deleted the team and created a new one, replacing the now-incremented ID. Moreover, when I knowingly enter the production server's team ID or token, the runner fails at a different step, as expected. My next step is to try to replicate the issue on a freshly forked repository with a new user on the staging server.
Can you please try this and let me know?
Hello again @RishabhJain2018, I was able to put together the series of events that reproduces the issue. Below I am adding the history of the runner status. What has happened?
From this point on, the user cannot create a challenge regardless of the auth token, name, or team ID, as the runner will keep failing since it cannot retrieve the user's eval_ai_auth_token (my assumption). Summary
Currently, there is no way to create a new leaderboard or dataset-phase split on an existing challenge without creating a new user. I really hope I am wrong here and you can provide me some instructions to do this.
Further investigation suggests that creating a new user on staging.eval.ai does not make a failing repository functional. Instead, the GitHub repository must be changed.
Hi @tutunarsl, Thank you so much for such a detailed investigation. We'll fix this bug. I hope you are able to create the challenge now. |
Hi @RishabhJain2018, yes, I was able to create it; however, I am currently facing struggles similar to those other users have faced. I will follow up on this in another issue.
Oh... what is the issue?
I am trying to isolate and understand the different parts of the challenge hosting and submission procedure. To achieve this, I have followed the instructions and created the default "Random Number Generator Challenge" to test the submission procedure and the worker/submission logs. After submitting an arbitrary file (as this example randomly fills the output["results"] entry), I am unable to get any feedback from the web UI. Supposedly the file is submitted successfully, judging by the green status indicator. Similar problems are mentioned in Cloud-CV/EvalAI-Starters#94 and Cloud-CV/EvalAI-Starters#102. However, despite waiting for several hours, I am not getting any feedback from the UI. Some possible feedbacks:
My biggest concern is that this seems to be an issue despite using the simplest possible example. Can you confirm that the staging server has worker capacity similar to the production server?
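For reference, the reason any arbitrary file "passes" in the starter challenge is that its evaluation script fills the output with random scores. A minimal sketch in the starter-challenge style — the `evaluate(...)` signature and result shape follow the EvalAI host documentation, but treat the split and metric names here as illustrative assumptions:

```python
# Sketch of an EvalAI evaluation script in the starter-challenge style.
# Signature and output shape follow the EvalAI host docs; the split name
# ("train_split") and metric names are illustrative assumptions.
import random

def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwargs):
    output = {}
    # The starter example ignores the submitted file entirely and fills
    # random scores, which is why any submitted file appears to succeed.
    output["result"] = [
        {
            "train_split": {
                "Metric1": random.uniform(0, 1),
                "Total": random.uniform(0, 1),
            }
        }
    ]
    # What the participant should eventually see on "My Submissions",
    # but only after a worker has actually picked up and run the job.
    output["submission_result"] = output["result"][0]["train_split"]
    return output
```

The key point is that the script itself runs in seconds; if no scores appear for hours, the bottleneck is the worker queue, not the evaluation code.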
No, the staging server is very small. Can you please try the same thing on the production server?
The production server runs. It seems to respond with some odd delay and has its own issues, but it runs. As this information about the staging server (and its consequence: no worker allocation at all) was not available to the user, I spent quite a bit of time on this. This seems critical, as currently the user's development time is not respected, and there is little incentive to stay on the platform.
Hi @tutunarsl ,
Thank you for your feedback and we will try to make this as smooth as possible for the user. |
Hi @tutunarsl, were you able to create the challenge on EvalAI, or are there any issues I can help with for your challenge?
After tackling the aforementioned issues and some not mentioned here, yes, I was able to create a challenge. Many of the problems I encountered have already been reported, idling as PRs or swept under the rug as old issues. Some of them are:
EvalAI has big potential; however, some very basic things are questionably hard. There were many moments I asked myself, "this is waay harder than it is supposed to be, I should use another tool." The community already contributes to a degree, judging by the open issues and PRs, yet maintenance seems slow.
Hi @tutunarsl, Thank you for the feedback. We will definitely work towards making the challenge hosting experience as smooth as possible. |
To begin with: I have previously created a challenge successfully, so I can confirm I was able to follow the instructions.
However, I am currently encountering an issue as a host at the time of challenge creation/update through GitHub.
Basically, the GitHub runner fails in the "Create or update challenge" step. The same setup, with the same GitHub secrets, challenge authentication, team number, etc., was working a week ago.
Here is the part of the runner logs that is possibly relevant to the issue.
Through the provided link, this error is shown.
I would appreciate any advice you can provide to help identify the issue.
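When the "Create or update challenge" step fails, the HTTP status code behind the error narrows down the cause considerably. A rough diagnostic mapping — these are general REST/DRF conventions, not EvalAI-specific guarantees:

```python
# Sketch: map an HTTP status from the challenge create/update call to a
# likely cause. General REST/DRF conventions; not EvalAI-specific guarantees.
LIKELY_CAUSE = {
    400: "bad request (invalid or malformed challenge configuration payload)",
    401: "token missing, malformed, or expired "
         "('Authentication credentials were not provided')",
    403: "token valid but not authorized for this host team (check the team ID)",
    404: "wrong host URL or team/challenge ID (staging vs production mix-up)",
    500: "server-side error; retry, and report it if it persists",
}

def diagnose(status_code: int) -> str:
    return LIKELY_CAUSE.get(status_code, f"unexpected status {status_code}")
```

In this thread's case, a 401 with a fresh token points at the header the runner constructs rather than the token value itself.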