Fractions
Sometimes, using floating-point arithmetic (e.g. using float or double types) just doesn’t cut it. Floating-point values cannot represent all values exactly, and once you start adding/subtracting/multiplying/dividing such values, the inaccuracies are likely to compound quickly into an unworkable mess. Depending on the domain you’re working in, different solutions can be appropriate. E.g., if you’re working with currency, you might need a type representing decimal numbers, or, if you’re working with musical timelines or scores, especially where tuplets (e.g. triplets) come into the mix, a type accurately representing any fraction may be called for. Here we will look into the latter: a type where each instance represents a fraction. We want to be able to perform basic arithmetic calculations on those numbers.
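To see the problem in concrete terms, here is a minimal illustration of the kind of drift involved:

let sum = 0.1 + 0.2
print(sum)        // 0.30000000000000004
print(sum == 0.3) // false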
Since we’re being Swifty, but more importantly, since we’re going to represent fairly basic values and care about (im)mutability, we’ll use a value type to represent our fractions: a struct.
Fractions have just two properties: a denominator (representing the value’s base unit) and a numerator (expressing how many of those units the value comprises).
The result:
struct Fraction {
    var numerator: Int
    var denominator: Int
}
What’s next? We want to be able to instantiate fractions. Since we are using a Swift struct, we would normally not need to write an initializer at all: a memberwise one is synthesized for us. But we do want to perform some basic input checking in our initializer, to ensure we end up with a reasonably safe instance. For starters, division by zero is illegal, so we need to check that the denominator is not zero. Secondly, since we will implement an essential function that flips negative values to their positive counterpart, we cannot allow the use of Int.min (negating Int.min overflows, because its positive counterpart does not fit in an Int), and have to check that both parameters are greater than Int.min. We could create a throwing initializer, and throw an error when invalid parameter values are provided, but I think that is overkill for the task at hand, and it might make the initializer less convenient at the point of use. Instead, we’ll make the initializer failable:
init?(numerator: Int, denominator: Int) {
    guard denominator != 0 else { return nil }
    guard numerator > Int.min, denominator > Int.min else { return nil }
    self.numerator = numerator
    self.denominator = denominator
}
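With the failable initializer in place, invalid input simply yields nil at the point of use (a quick illustrative sketch):

let half = Fraction(numerator: 1, denominator: 2)   // a Fraction? containing 1/2
let broken = Fraction(numerator: 1, denominator: 0) // nil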
Now, before we start implementing arithmetic operations on the type, let’s get a useful utility out of the way. It is likely we will encounter occasions where we need to convert our fraction into a floating-point representation:
var doubleValue: Double {
    Double(numerator) / Double(denominator)
}

var floatValue: Float {
    Float(doubleValue)
}
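For example (illustrative values, not from the project source):

let threeQuarters = Fraction(numerator: 3, denominator: 4)!
threeQuarters.doubleValue // 0.75
threeQuarters.floatValue  // 0.75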
(Remember: in Swift 5.1 and later we do not have to write return when the function body consists of a single expression.)
We also want a compact way to print our fractions:
var description: String {
    "\(numerator)/\(denominator)"
}
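As an aside, and not part of the code shown here: if we additionally declare conformance to CustomStringConvertible, print and string interpolation will pick up this description automatically, since the property above already satisfies the protocol’s single requirement:

extension Fraction: CustomStringConvertible {}

print(Fraction(numerator: 1, denominator: 2)!) // prints "1/2"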
With that out of the way, it’s time to implement our first operation: addition. We want both mutating and nonmutating variants.
mutating func add(_ other: Fraction) {
    if denominator == other.denominator {
        numerator += other.numerator
    } else {
        numerator = numerator * other.denominator + other.numerator * denominator
        denominator = denominator * other.denominator
    }
}
func adding(_ other: Fraction) -> Fraction {
    var copy = self
    copy.add(other)
    return copy
}
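The nonmutating variant leaves the original value untouched and returns the sum; for example (illustrative values):

let f3_4 = Fraction(numerator: 3, denominator: 4)!
let f1_4 = Fraction(numerator: 1, denominator: 4)!
let sum = f3_4.adding(f1_4)
sum.description  // 4/4
f3_4.description // 3/4, unchanged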
Now we can create two fractions and add them together:
var f2_4 = Fraction(numerator: 2, denominator: 4)!
let f1_2 = Fraction(numerator: 1, denominator: 2)!
f2_4.add(f1_2)
f2_4.description // 8/8
Nice! The result is correct. But it is probably not the value we were looking for. It is desirable to be able to reduce fractions to their lowest terms by dividing out their GCD (Greatest Common Divisor). To enable us to do that, we implement a function we’ll, unsurprisingly, call reduce:
mutating func reduce() {
    var u = numerator
    var v = denominator
    // Euclid's algorithm for finding the Greatest Common Divisor
    while v != 0 {
        (v, u) = (u % v, v)
    }
    numerator /= u
    denominator /= u
}
func reduced() -> Fraction {
    var copy = self
    copy.reduce()
    return copy
}
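A quick sanity check (illustrative values):

var f6_8 = Fraction(numerator: 6, denominator: 8)!
f6_8.reduce()
f6_8.description // 3/4

Fraction(numerator: 10, denominator: 4)!.reduced().description // 5/2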
We change the add functions to take a parameter indicating whether we want to reduce the result of the operation before returning it. We’ll assume we usually want to do that, so we default to true.
mutating func add(_ other: Fraction, reducing: Bool = true) {
    if denominator == other.denominator {
        numerator += other.numerator
    } else {
        numerator = numerator * other.denominator + other.numerator * denominator
        denominator = denominator * other.denominator
    }
    if reducing {
        self.reduce()
    }
}
Now, let’s run the sample addition above again:
var f2_4 = Fraction(numerator: 2, denominator: 4)!
let f1_2 = Fraction(numerator: 1, denominator: 2)!
f2_4.add(f1_2)
f2_4.description // 1/1
Great! And if we want to prevent the reduction (starting again from a fresh 2/4):
f2_4.add(f1_2, reducing: false)
f2_4.description // 8/8
There’s a problem lurking in our reduce implementation though. You’ll discover it if you write unit tests that exercise that function with fractions that contain negative values:
var f1 = Fraction(numerator: -3, denominator: 15)!
f1.reduce()
f1.description // 1/-5
The correct answer should be -1/5! Looks like we need to refine our reduce function.
mutating func reduce() {
    let (absNumerator, numeratorSign) = numerator < 0 ? (-numerator, -1) : (numerator, 1)
    let (absDenominator, denominatorSign) = denominator < 0 ? (-denominator, -1) : (denominator, 1)
    var u = absNumerator
    var v = absDenominator
    // Euclid's algorithm for finding the Greatest Common Divisor
    while v != 0 {
        (v, u) = (u % v, v)
    }
    numerator = absNumerator / u * numeratorSign
    denominator = absDenominator / u * denominatorSign
}
We take the absolute values of the numerator and denominator and perform the reduction on these. When done we re-establish the signedness of each, thus ensuring a correct result:
var f1 = Fraction(numerator: -3, denominator: 15)!
f1.reduce()
f1.description // -1/5
The implementations for subtraction, multiplication and division follow the same pattern as the implementation for addition. You can check out the actual implementations in the project’s source. We won’t discuss them here.
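Still, to give a rough idea of the pattern, here is a minimal sketch of what the multiplication pair could look like (the method names match those used by the operator overloads below; the actual project source may differ in detail):

mutating func multiply(by other: Fraction, reducing: Bool = true) {
    // Multiplying fractions: numerator * numerator over denominator * denominator.
    numerator *= other.numerator
    denominator *= other.denominator
    if reducing {
        self.reduce()
    }
}

func multiplying(by other: Fraction, reducing: Bool = true) -> Fraction {
    var copy = self
    copy.multiply(by: other, reducing: reducing)
    return copy
}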
So, as we saw above, now we can write
var f2_4 = Fraction(numerator: 2, denominator: 4)!
let f1_2 = Fraction(numerator: 1, denominator: 2)!
f2_4.add(f1_2) // or: let result = f2_4.adding(f1_2)
but wouldn’t it be nicer if we could write
let result = f2_4 + f1_2
Yes? Then let’s make that possible by overloading the operators. I think this is a domain where doing that makes sense. And, since the implementations are very compact, I’ll list them all here:
func + (lhs: Fraction, rhs: Fraction) -> Fraction {
    lhs.adding(rhs)
}

func - (lhs: Fraction, rhs: Fraction) -> Fraction {
    lhs.subtracting(rhs)
}

func * (lhs: Fraction, rhs: Fraction) -> Fraction {
    lhs.multiplying(by: rhs)
}

func / (lhs: Fraction, rhs: Fraction) -> Fraction {
    lhs.dividing(by: rhs)
}
This allows us to write:
var f2_4 = Fraction(numerator: 2, denominator: 4)!
let f1_2 = Fraction(numerator: 1, denominator: 2)!
let result = f2_4 + f1_2 // 1/1
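And because the operators return new Fraction values, they compose naturally. Assuming the subtracting implementation in the project source follows the same reducing behaviour as add, we can chain them (illustrative values):

let f1_3 = Fraction(numerator: 1, denominator: 3)!
let chained = f1_2 + f1_3 - f1_3
chained.description // 1/2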
Still missing is the ability to compare fractions, so let’s turn our attention to that. To enable comparisons we conform our Fraction type to the Comparable protocol. This involves implementing two functions: == and <. The implementations are fairly standard and simple, but there is one gotcha: fractions with minus signs in different positions may still be equal. For instance, 1/-4 represents the same value as -1/4; and -1/-4 is the same as 1/4. To account for that we need to ‘normalize’ these fractions to facilitate the implementation of the comparison functions. The rules for normalization are provided as documentation on the normalization methods below:
extension Fraction: Comparable {

    /// Fractions with two negative signs are normalized to two positive signs.
    /// Fractions with a negative denominator are normalized to a positive denominator and a negative numerator.
    mutating func normalize() {
        if numerator >= 0 && denominator >= 0 { return }
        if (numerator < 0 && denominator < 0) || (denominator < 0) {
            numerator *= -1
            denominator *= -1
        }
    }

    /// Fractions with two negative signs are normalized to two positive signs.
    /// Fractions with a negative denominator are normalized to a positive denominator and a negative numerator.
    func normalized() -> Fraction {
        var copy = self
        copy.normalize()
        return copy
    }
With these functions in place, we can safely implement the comparison functions.
    static func == (lhs: Fraction, rhs: Fraction) -> Bool {
        let lhsRed = lhs.reduced().normalized()
        let rhsRed = rhs.reduced().normalized()
        return lhsRed.numerator == rhsRed.numerator && lhsRed.denominator == rhsRed.denominator
    }

    static func < (lhs: Fraction, rhs: Fraction) -> Bool {
        let lhsRed = lhs.reduced().normalized()
        let rhsRed = rhs.reduced().normalized()
        let lhsNumeratorProduct = lhsRed.numerator * rhsRed.denominator
        let rhsNumeratorProduct = rhsRed.numerator * lhsRed.denominator
        return lhsNumeratorProduct < rhsNumeratorProduct
    }
}
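With the conformance in place, fractions that only differ in where the minus sign sits compare as equal, and ordering works as expected (illustrative checks):

let a = Fraction(numerator: 1, denominator: -4)!
let b = Fraction(numerator: -1, denominator: 4)!
a == b // true

Fraction(numerator: 1, denominator: 3)! < Fraction(numerator: 1, denominator: 2)! // true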
There’s only one issue left now. If we operate on fractions with a negative denominator, the results may surprise us. Which is a euphemistic way of saying that the results may well be incorrect. As the code currently stands, the following unit test will fail:
var f1_m4 = Fraction(numerator: 1, denominator: -4)!
let f2_4 = Fraction(numerator: 2, denominator: 4)!
f1_m4.add(f2_4)
XCTAssertTrue(f1_m4.numerator == 1 && f1_m4.denominator == 4, "Incorrect result adding \(f2_4.description) to 1/-4. Expected 1/4, got \(f1_m4.description)")
The solution to getting the test to pass is to normalize the Fractions before operating on them, as we did in the implementation providing the Comparable protocol conformance. So now, the implementation of add becomes:
mutating func add(_ other: Fraction, reducing: Bool = true) {
    self.normalize()
    let normalizedOther = other.normalized()
    if denominator == normalizedOther.denominator {
        numerator += normalizedOther.numerator
    } else {
        numerator = numerator * normalizedOther.denominator + normalizedOther.numerator * denominator
        denominator = denominator * normalizedOther.denominator
    }
    if reducing {
        self.reduce()
    }
}
And that’s it. We now have a basic implementation of a type that allows us to easily perform calculations on fractions, without loss of precision.
You can find the full source code, including implementations for the remainder of the arithmetic operators, and unit tests, here.