Winners are chosen based on a held-out part of the test dataset whose true values users never see and never get feedback on during the competition round - so it's very difficult to overfit to that private set.
They are - your rank on the leaderboard determines your model's weight in their 'meta-model', so submitting the same predictions as someone else would just net you 0.
The thing is, no one knows what each row of the data represents, or even what the features are or what's being predicted. Each submission has 30,000 predictions, so you would need an unreasonably good random guess to get anywhere near the top of the leaderboard.
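A quick back-of-the-envelope simulation makes the point about random guessing concrete. This is a toy sketch, not Numerai's actual scoring: it assumes binary labels and scores submissions by plain accuracy, just to show how tightly 30,000-row random guesses concentrate around chance.

```python
import random
import statistics

N = 30_000       # predictions per submission, as in the comment above
TRIALS = 1_000   # hypothetical number of purely random submissions

random.seed(0)
# Toy hidden labels (the "private set" nobody gets to see)
truth = [random.random() < 0.5 for _ in range(N)]

# Accuracy of each random submission against the hidden labels
accuracies = [
    sum(g == t for g, t in zip((random.random() < 0.5 for _ in range(N)), truth)) / N
    for _ in range(TRIALS)
]

best = max(accuracies)
spread = statistics.stdev(accuracies)
print(f"best of {TRIALS} random submissions: {best:.4f}")
print(f"std dev across submissions:          {spread:.5f}")
```

With 30,000 predictions, the standard deviation of a random guesser's accuracy is about sqrt(0.25 / 30000) ≈ 0.003, so even the luckiest of a thousand random submissions barely clears 51% - nowhere near the top of a leaderboard of real models.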