Cannot Divide and Assign Int64 Value

The expression (b /= 1000) evaluates to Void (aka ()), not to the new value of b.

You can't compare a Void and a number (by default).

You can refactor it to:

var b: Int64 = Int64(1e3)
b /= 1000
let bb = (b < 999_950)


Based on the evidence in your other answer, you are looking for a byte converter. You can achieve that like this:

import Foundation

func convertBitrateToHumanReadable(bytes: Int64) -> String {
    ByteCountFormatter().string(fromByteCount: bytes)
}
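
For example (the output shown in the comment assumes the formatter's default .file count style):

let label = convertBitrateToHumanReadable(bytes: 1_000_000)
// label == "1 MB" with the default settings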


Previous solution:

You can implement a custom operator for this. (I'm not a fan of using this; see @Alexander - Reinstate Monica's comment.)

// MultiplicationPrecedence already exists in the standard library,
// so no custom precedence group needs to be declared.
infix operator /=> : MultiplicationPrecedence

// Constrained to BinaryInteger because Numeric does not provide /.
public func /=> <T: BinaryInteger>(lhs: inout T, rhs: T) -> T {
    lhs /= rhs   // divide and assign
    return lhs   // return the new value so it can be used in a larger expression
}

Now you can use it like this:

let bb = (b /=> 1000 < 999_950)

Double value cannot be converted to Int64 because it is either infinite or NaN

Try to follow the error message...

"Double value cannot be converted to Int64 because it is either infinite or NaN" - and you say it's on the let seekTime = CMTime(value: Int64(val), timescale: 1) line...

So... since that line uses Int64(val), it's pretty clear where the problem is: val is either infinite or NaN.

So... val is set by let val = Float64(sender.value) * totalsecs, and totalsecs comes from the duration: let totalsecs = CMTimeGetSeconds(duration) ...

And... you say duration is NaN.

Find out why duration is NaN, and you have your answer.
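
Until you find the root cause, a defensive check keeps the crash away. Here is a minimal sketch (the function name and parameters are mine; only the CMTime calls come from the code in the question):

import CoreMedia

// Returns nil instead of crashing when the duration is not yet a finite number.
func seekTime(sliderValue: Float, duration: CMTime) -> CMTime? {
    let totalsecs = CMTimeGetSeconds(duration)
    guard totalsecs.isFinite else { return nil }   // duration is NaN or infinite
    let val = Float64(sliderValue) * totalsecs
    return CMTime(value: Int64(val), timescale: 1)
}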

Why does dividing two ints not yield the right value when assigned to a double?

This is because you are using the integer-division version of operator/, which takes two ints and returns an int. To get the double version, which returns a double, at least one of the ints must be explicitly cast to a double.

c = a / (double)b;

InexactError: Int64 even when checking divisibility

TLDR: use ÷.

Dividing integers with the / function in Julia returns floating point numbers. Importantly, it promotes integers to floating point numbers before it does the division. For example, the number 10^16-1 is obviously divisible by three:

julia> 9999999999999999 % 3
0

However, if we try to do this division with /, we don't get the correct answer:

julia> 9999999999999999 / 3
3.3333333333333335e15

julia> using Printf

julia> @printf "%f" 9999999999999999 / 3
3333333333333333.500000

So of course, trying to store the above number in an integer array is going to throw an InexactError. But why is this? Because you're above maxintfloat(Float64). Since floating point numbers have ~15 decimal digits of precision, above this value they are no longer able to represent every single integer exactly. They start skipping (and rounding!) them. Thus you're not really dividing 10^16-1 by three, you're dividing 10^16 by three!

julia> 10^16-1
9999999999999999

julia> Float64(10^16-1)
1.0e16

Much better to use ÷ (that is, div) — not only will it handle these cases without worrying about floating point precision, but it'll also keep things as integers:

julia> 9999999999999999 ÷ 3
3333333333333333

How to change int into int64?

This is called type conversion:

i := 23
var i64 int64
i64 = int64(i) // explicit conversion; Go never converts between numeric types implicitly

Why can't I divide integers in Swift?

The OP seems to know what the code has to look like, but he is explicitly asking why it does not work the other way.

So, "explicitly" is part of the answer he is looking for: Apple writes inside the "Language Guide" in chapter "The Basics" -> "Integer and Floating-Point Conversion":

Conversions between integer and floating-point numeric types must be made explicit
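
For example (values of my own, just to illustrate the rule):

let a = 3
let b = 2

// let x: Double = a / b      // error: a / b is an Int, and Int is not implicitly a Double
let y = Double(a) / Double(b) // 1.5; the conversion is spelled out explicitly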

Can I cast Int64 directly into Int?

Converting an Int64 to Int by passing the Int64 value to the Int initializer will always work on a 64-bit machine, but it will crash on a 32-bit machine if the value is outside the range Int32.min ... Int32.max.

For safety, use the init(truncatingIfNeeded:) initializer (known as init(truncatingBitPattern:) in earlier Swift versions) to convert the value:

return Int(truncatingIfNeeded: rowid)

On a 64-bit machine, truncatingIfNeeded does nothing; you just get an Int (which is the same size as an Int64 anyway).

On a 32-bit machine, this will throw away the top 32 bits, but if they are all zeroes, then you haven't lost any data. So as long as your value fits into a 32-bit Int, you can do this without losing data. If your value is outside the range Int32.min ... Int32.max, this will change the value of the Int64 into something that fits in a 32-bit Int, but it will not crash.


You can see how this works in a Playground. Since Int in a Playground is a 64-bit Int, you can explicitly use an Int32 to simulate the behavior of a 32-bit system.

let i: Int64 = 12345678901  // value bigger than maximum 32-bit Int

let j = Int32(truncatingIfNeeded: i) // j = -539,222,987
let k = Int32(i) // crash!

Update for Swift 3/4

In addition to init(truncatingIfNeeded:), which still works, Swift 3 introduced failable initializers to safely convert one integer type to another. Using init?(exactly:), you can initialize one type from another, and it returns nil if the initialization fails. The value returned is an optional which must be unwrapped in the usual ways.

For example:

let i: Int64 = 12345678901

if let j = Int32(exactly: i) {
    print("\(j) fits into an Int32")
} else {
    // the initialization returned nil
    print("\(i) is too large for Int32")
}

This allows you to apply the nil coalescing operator to supply a default value if the conversion fails:

// return 0 if rowid is too big to fit into an Int on this device
return Int(exactly: rowid) ?? 0

Why does integer division in C# return an integer and not a float?

While it is common for new programmers to make the mistake of performing integer division when they actually meant floating-point division, in practice integer division is a very common operation. If you assume that people rarely use it, and that every time you divide you'll need to remember to cast to floating point, you are mistaken.

First off, integer division is quite a bit faster, so if you only need a whole-number result, you would want to use the more efficient algorithm.

Secondly, there are a number of algorithms that use integer division, and if the result of division were always a floating point number you would be forced to round the result every time. One example off the top of my head is changing the base of a number: calculating each digit involves the integer division of a number along with the remainder, rather than the floating point division of the number.
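
As an illustration, here is a minimal sketch of that base-change idea in Swift (the function name and parameters are mine, purely for demonstration; the point is language-agnostic):

// Convert a non-negative integer to its digits in the given base,
// using only integer division (/) and remainder (%).
func digits(of value: Int, inBase base: Int) -> [Int] {
    precondition(value >= 0 && base >= 2)
    var n = value
    var result: [Int] = []
    repeat {
        result.append(n % base)  // next least-significant digit
        n /= base                // integer division drops that digit
    } while n > 0
    return Array(result.reversed())
}

// digits(of: 255, inBase: 16)  // [15, 15], i.e. 0xFF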

For these (and other related) reasons, integer division results in an integer. If you want the floating point division of two integers, you'll just need to remember to cast one to a double/float/decimal.


