Empirical water clarity models developed for one location or time have rarely been rigorously tested for transferability to other lakes and image dates with differing conditions. Likewise, machine learning methods have not been widely adopted for analysis of lake optical properties such as water clarity, despite their success in many other environmental remote sensing applications. This study compares the performance of a random forest (RF) machine learning algorithm and a simple 4-band linear model against 13 previously published empirical, non-machine-learning algorithms. The analysis uses Landsat surface reflectance data paired with spatially and temporally co-located in situ Secchi depth observations from northeastern USA lakes spanning a 34-year period. To evaluate the transferability of models across space and time, we compare model fit for the complete dataset (all images and samples) with a single-date approach, in which a separate model is developed for each date of Landsat imagery with more than 75 field samples. On average, the single-date models for all algorithms had lower mean absolute error (MAE) and root mean squared error (RMSE) than the models fit to the complete dataset. The RF model had the highest pseudo-R² for both the single-date approach and the complete dataset, suggesting that an RF approach outperforms traditional linear regression-based algorithms for modeling lake water clarity from satellite imagery.
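The two error metrics used to compare the models can be sketched in a few lines. The observation and prediction values below are illustrative placeholders, not data from the study:

```python
import math

def mae(obs, pred):
    # Mean absolute error between observed and predicted Secchi depths (m)
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    # Root mean squared error; penalizes large misses more heavily than MAE
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

# Hypothetical in situ Secchi depths (m) and model predictions for one image date
obs = [2.1, 3.4, 1.8, 4.0]
pred = [2.4, 3.0, 2.0, 3.6]
print(mae(obs, pred))   # 0.325
print(rmse(obs, pred))
```

Comparing these metrics per image date (the single-date approach) versus across all dates pooled (the complete dataset) is the core of the transferability test described above.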