Machine Learning CodeSignal Question

First time posting here.

I have a machine learning core framework assessment coming up and have a question to those who took it before.

I took one back in June of last year, and in one implementation, although I coded the algorithm correctly, it only passed a few test cases. This was because of the random number generator: it determines which samples are selected for each decision tree in the bootstrap aggregating (bagging) algorithm, and each tree needs a different random state so the trees end up different from one another.
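For context, this is the pattern being described: bagging draws a bootstrap sample (sampling with replacement) for each tree, and a common way to keep the trees different while staying reproducible is to derive each tree's random state from a base seed. A minimal sketch in Python (function and parameter names are my own, not CodeSignal's harness):

```python
import random

def bootstrap_samples(data, n_trees, base_seed=0):
    """Draw one bootstrap sample (same size as data, with replacement)
    per tree. Each tree gets its own seeded RNG, so the samples differ
    across trees but are identical on every run."""
    samples = []
    for t in range(n_trees):
        # Distinct but deterministic random state per tree.
        rng = random.Random(base_seed + t)
        samples.append([rng.choice(data) for _ in range(len(data))])
    return samples

data = [1, 2, 3, 4, 5]
samples = bootstrap_samples(data, n_trees=3)
```

Because every tree's state is derived from `base_seed`, rerunning the code reproduces the exact same samples, which is what a deterministic hidden-test harness would require.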

I remember passing something like 3 out of 10 or 20 hidden tests. So if a hidden test expected the accuracy to be exactly 100% and my algorithm came out to 99.98%, for example, it was considered wrong.

I was very frustrated by this because it caused me to fail the assessment. I emailed CodeSignal the same day and explained that my algorithm was technically correct; they basically told me to kick rocks.

Does anyone know what exactly I should do about this? A random number generator must be used to draw different samples and build different trees. Even their own bagging algorithm in their practice code uses a random number generator, and I used it the exact same way. Still wrong.
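For what it's worth, only the sampling step depends on the RNG; the aggregation step in bagging is deterministic, so any score differences come from the sampling seeds alone. A minimal majority-vote sketch of that aggregation (my own illustration, not CodeSignal's code):

```python
from collections import Counter

def bagged_predict(tree_predictions):
    """Combine per-tree class predictions by majority vote,
    as bagging does for classification."""
    # most_common(1) returns [(label, count)] for the top label.
    return Counter(tree_predictions).most_common(1)[0][0]
```

Given the same per-tree predictions, this always returns the same label, so nondeterminism in a bagging implementation can only enter through how the bootstrap samples (and any feature subsets) are drawn.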

Any suggestions?

This is one of the many reasons why I hate CodeSignal.

submitted by /u/Imposter_89