Unfortunately, robots.txt only stops the well-behaved scrapers. Even with a disallow-all rule, you’ll still get loads of bots. Configuring the web server to block their user agents works a bit better, but even then there are bots out there crawling with regular browser user agents.
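
To illustrate the server-side idea anyway, here's a minimal sketch, written as a small Python WSGI app rather than actual web server config, and with a made-up blocklist, that rejects requests whose User-Agent matches known crawler strings:

```python
# Sketch: return 403 to requests whose User-Agent contains a blocked substring.
# The BLOCKED entries are illustrative examples, not a vetted or complete list.
from wsgiref.simple_server import make_server

BLOCKED = ("GPTBot", "CCBot", "Bytespider")  # hypothetical blocklist

def app(environ, start_response):
    ua = environ.get("HTTP_USER_AGENT", "")
    if any(bot.lower() in ua.lower() for bot in BLOCKED):
        start_response("403 Forbidden", [("Content-Type", "text/plain")])
        return [b"Forbidden\n"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello\n"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```

Of course, any bot that sends a plain browser User-Agent string walks straight past a check like this, which is exactly the problem.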