I disagree with the philosophy proposed by the recent post at DevOps on Windows that you should think twice before you automate a repetitive task. You should automate everything.
Humans are terrible at repetitive tasks. Even when a task is well defined and documented, after a few executions humans start to ignore the details, and one mistake in a normally safe process can cause a catastrophic failure.
I agree that quick and dirty fixes should be avoided. An automation effort should be treated like any product-focused project: determine the level of effort (LOE) and return on investment (ROI), set some acceptance criteria, and build a task backlog to work against.
Automation isn’t just about saving time; it’s about developing a solution that removes human error from the process. Software can be tested; it can be checked into source control along with any necessary configuration. When a process is executed manually, it’s usually done by one person who becomes the subject matter expert for that process, and then the day comes when they aren’t available and someone has to test the validity of the documentation, if any even exists.

Humans also don’t scale well. Maybe it only takes an hour a week now to do some cleanup manually, but what happens when you add more servers or more data points? Even if it takes a machine the same amount of time, it isn’t occupying a person who should be working on solving the next problem. If you ignore automation while you have the time, you’ll eventually reach a point where you spend so much time on manual processes that you don’t have time to develop automation.
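The testability point can be made concrete with a small sketch. All names here are hypothetical, not from any real system: the idea is simply that a cleanup rule written as a function can be pinned down by an automated check before it ever touches production data, instead of living only in one person’s head.

```python
from datetime import datetime, timedelta, timezone

def select_expired(files, max_age_days=30, now=None):
    """Return the names of (name, modified) pairs older than max_age_days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [name for name, modified in files if modified < cutoff]

# The test doubles as executable documentation of the retention policy.
now = datetime(2024, 1, 31, tzinfo=timezone.utc)
files = [
    ("old.log", datetime(2023, 12, 1, tzinfo=timezone.utc)),
    ("new.log", datetime(2024, 1, 30, tzinfo=timezone.utc)),
]
assert select_expired(files, max_age_days=30, now=now) == ["old.log"]
```

Because the rule is code, a change to the retention window is a reviewed commit with a passing test, not a tweak someone remembers to make by hand.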
A lot of the time, the complexity of developing automation exposes inefficiencies in other processes. For example, you need to clean up some data, but there are certain exceptions. Why do you store different classifications of data in the same place? Could you redirect them at the source to different folders or tables, or do you even need to save that data in the first place?
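As a minimal sketch of that redirect-at-the-source idea (the classification names and folder layout are invented for illustration): if each record is routed by classification when it is written, the later cleanup job can purge a whole folder instead of carrying per-record exceptions.

```python
# Hypothetical routing table: classification -> destination folder.
ROUTES = {
    "audit": "retain/audit",      # must be kept, cleanup never touches it
    "debug": "ephemeral/debug",   # safe to purge wholesale
}

def route(record):
    """Pick a destination folder based on the record's classification."""
    dest = ROUTES.get(record["classification"])
    if dest is None:
        # Failing loudly beats silently mixing classifications together.
        raise ValueError(f"unknown classification: {record['classification']}")
    return dest

assert route({"classification": "debug"}) == "ephemeral/debug"
```

The exception logic moves from the cleanup script, where it is easy to get wrong under pressure, to the write path, where it is exercised on every record.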
DevOps is about freeing up people to develop solutions and allowing the machine to execute.