The shorter the feedback cycle for detecting issues in your code, the more productive both engineers and teams will be. Getting feedback on your code at code review is great - but would it not be even better to get instant feedback, before you even submit your code for review?
Productive teams put advanced code checking infrastructure in place early on, precisely to get rapid feedback on “easy to spot” code quality issues. Linting and static analysis are the two most common approaches, and they are often used together.
Code formatting is one of the most common code quality checks. A code formatter runs before a pull request is created, ensuring that all code up for review follows the style guide the team agreed on and encoded in the formatter's configuration. Popular code formatters include SwiftFormat and google-java-format.
Linting is a special case of static analysis: scanning the code for potential errors, beyond just code formatting. Checks range from trivial ones, like ensuring indentation is correct, through enforcing naming patterns, all the way to more advanced rules, like requiring variables to be declared in alphabetical order. Popular linting tools include:
- iOS: the Clang analyzer - shipping with Xcode - and SwiftLint
- Android: the lint tool - shipping with Android Studio - and ktlint for Kotlin.
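To make the "enforcing naming patterns" category concrete, here is a minimal sketch in Java of the kind of rule a linter applies under the hood. The rule and regex are illustrative, not taken from any of the tools above:

```java
import java.util.regex.Pattern;

// Toy lint rule: identifiers must be lowerCamelCase.
// The rule and pattern are hypothetical, for illustration only.
public class CamelCaseRule {
    private static final Pattern LOWER_CAMEL =
            Pattern.compile("[a-z][a-zA-Z0-9]*");

    public static boolean isValidName(String identifier) {
        return LOWER_CAMEL.matcher(identifier).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidName("userName"));  // lowerCamelCase passes
        System.out.println(isValidName("user_name")); // snake_case is flagged
    }
}
```

Real linters run hundreds of such rules against the parsed source tree, rather than against raw identifier strings, but the per-rule logic is often this small.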
As the team grows, it can make sense to start enforcing more complex rules across the codebase. These rules could enforce team-wide coding patterns - like restricting forced values - or architecture “rules” - like a View not being allowed to invoke Interactors directly.
At Uber, we’ve seen lots of value in adding architecture definitions as “lint enforceable” rules. To do so, the team built and open sourced NEAL (Not Exactly A Linter) for more advanced pattern detection, used across iOS and Android.
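A custom architecture rule like the View/Interactor one above can be approximated with a very simple check. The sketch below is a hypothetical, heavily simplified stand-in - tools like NEAL match patterns on the parsed AST, not on raw strings - but it shows the shape of such a rule:

```java
// Toy architecture check: flag View classes that reference an Interactor
// directly. The file-naming convention and the rule itself are hypothetical;
// real tools perform AST-based pattern matching instead of string scans.
public class ArchitectureRule {
    public static boolean violatesViewRule(String fileName, String source) {
        boolean isView = fileName.endsWith("View.java");
        boolean touchesInteractor = source.contains("Interactor");
        return isView && touchesInteractor;
    }
}
```

A check like this would run in CI over every changed file, failing the build with a pointer to the architecture documentation when a violation is found.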
Lint fatigue is a problem that starts to occur in large projects or in ones with many linting rules. As errors and warnings pile up, engineers often start to ignore them. A good example is ignoring deprecation warnings when it’s not clear how to migrate to the new implementation of an API.
A common way of dealing with lint fatigue is to make linting errors break the build - leaving no choice but to fix them. A bit annoying, but effective. Another approach is to build tools that fix linting errors automatically. This is the approach Instagram took: they used automated refactoring to educate engineers about coding best practices.
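An automated fix can be as simple as a mechanical rewrite applied across the codebase. Below is a hypothetical codemod sketch in Java that migrates callers from a deprecated API name to its replacement; the method names are invented for illustration:

```java
// Toy "auto-fixer": rewrite calls to a deprecated API to its replacement.
// oldFetchUser/fetchUser are hypothetical names, purely for illustration.
// A production codemod would rewrite the AST rather than the raw text,
// but plain text replacement is enough to show the idea.
public class DeprecationFixer {
    public static String migrate(String source) {
        return source.replace("oldFetchUser(", "fetchUser(");
    }

    public static void main(String[] args) {
        System.out.println(migrate("api.oldFetchUser(id);"));
    }
}
```

Running such a fixer as part of the lint step - or as a suggested patch attached to the warning - turns a nagging deprecation warning into a one-click migration.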
Static Analysis #
Static analysis is the more generic term for the automatic inspection of code, looking for potential issues and errors. Mobile static analysis tools usually help detect issues that are more complex than what a simple lint rule could catch.
Most static analysis tools are written for a single language - Java, Kotlin, Objective-C, or Swift - and detect common programming issues like unused variables, empty catch blocks, possible null values, and others. On top of the linting tools listed above, static analysis tools you could consider are:
- Multiple languages: SonarQube and SonarCloud (advanced static analysis), Infer (Java, Objective-C), Codebeat, NEAL, Whispers (scanning for hardcoded credentials)
- Swift: Clang analyzer (ships with Xcode), SwiftLint, SwiftInfo, Tailor, SwiftFormat
- Kotlin: ktlint (a “no-decision” linter), detekt (code smells and complexity reports)
- Objective-C: Clang analyzer (ships with Xcode), OCLint, Faux Pas
- Java: lint (ships with Android Studio), NullAway (annotation-based null checks), FlowDroid (data flow analysis), CogniCrypt (secure cryptography integration checks), PMD (Programming Mistake Detector), Checkstyle. See also this repository of static analysis tools per language.
The upside of using linting and static analysis tools is more rapid feedback, and code reviewers no longer needing to check for “common” code issues. Code quality generally stays higher, as the tools enforce the rules. With advanced tooling, static analysis can result in more stable and secure apps by detecting edge cases ahead of time. A good example of added stability is using a tool to prevent runtime crashes due to null objects, catching them by analyzing the code at compile time.
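To make the null-crash example concrete: an annotation-based tool like NullAway flags dereferences of values that might be null, at compile time, pushing the author toward explicit handling. The sketch below shows the shape of the problem and one fix using the standard library's Optional; the lookup method and data are hypothetical:

```java
import java.util.Map;
import java.util.Optional;

// Sketch of the kind of null dereference a static analyzer catches
// before the code ever runs. UserLookup and its data are hypothetical.
public class UserLookup {
    private static final Map<Integer, String> USERS = Map.of(1, "Grace");

    // Returning null relies on every caller remembering to check;
    // a null-checking tool would flag unchecked uses of this result.
    static String findNameOrNull(int id) {
        return USERS.get(id); // null when the id is unknown
    }

    // Making absence explicit in the type removes this class of crash.
    static Optional<String> findName(int id) {
        return Optional.ofNullable(USERS.get(id));
    }

    public static void main(String[] args) {
        // findNameOrNull(2).length() would throw NullPointerException at runtime.
        System.out.println(findName(2).orElse("unknown"));
    }
}
```

The analyzer's job is to turn the first pattern - a latent runtime crash - into a build-time error, so the safer second pattern becomes the path of least resistance.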
The downside of these tools is the time it takes to integrate them and the maintenance they bring. You need to decide which tools to use and add them to your build setup - both for local builds and for the CI/CD pipeline. Once in place, you need to keep the rules up to date and, every now and then, update the tool itself to get new features you might need.
The more complex the tooling you choose, the more this maintenance can add up. At Uber, we set up extensive linting and static analysis checks. To me, the outcome felt worth the added effort. However, I would be hesitant to build the type of custom tooling we did - and would instead use a good enough tool for the job that can be set up with little effort.