Let's consider the classic definition of big O notation (proof link):
O(f(n)) is the set of all functions g such that there exist positive constants C and n0 with |g(n)| ≤ C * f(n) for all n ≥ n0.
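As a rough numerical illustration of this definition (a sketch only: the witness constants C and n0 below are hand-picked for these particular functions, and a finite sample of n cannot replace a proof for all n ≥ n0):

```python
# Numerical illustration of the big-O definition: g ∈ O(f) means there exist
# positive constants C and n0 such that |g(n)| <= C * f(n) for all n >= n0.
# Here we only sample a finite range of n, so this checks, not proves, membership.

def within_big_o(g, f, C, n0, n_max=10_000):
    """Check |g(n)| <= C * f(n) for all sampled n in [n0, n_max]."""
    return all(abs(g(n)) <= C * f(n) for n in range(n0, n_max + 1))

g1 = lambda n: 9999 * n**2 + n
g2 = lambda n: 5 * n**2 + n
f = lambda n: n**2

# Hand-picked witnesses: 9999*n^2 + n <= 10000*n^2 and 5*n^2 + n <= 6*n^2
# both reduce to n <= n^2, which holds for all n >= 1.
print(within_big_o(g1, f, C=10000, n0=1))  # True
print(within_big_o(g2, f, C=6, n0=1))      # True
```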
According to this definition, it is legal to do the following (g1 and g2 are functions describing the complexity of two algorithms):
g1(n) = 9999 * n^2 + n ∈ O(9999 * n^2)
g2(n) = 5 * n^2 + n ∈ O(5 * n^2)

And it is also legal to write the functions as:
g1(n) = 9999 * n^2 + n ∈ O(n^2)
g2(n) = 5 * n^2 + n ∈ O(n^2)

As you can see, the first variant, O(9999 * n^2) vs O(5 * n^2), is much more precise and gives us a clear view of which algorithm is faster. The second one does not show us anything.
The question is: why does nobody use the first variant?
Accepted answer:

The use of O() notation is, from the get-go, the opposite of noting something "precisely". The very idea is to mask "precise" differences between algorithms, as well as to be able to ignore the effects of computing hardware specifics and the choice of compiler or programming language. Indeed, g1(n) and g2(n) are both in the same class (or set) of functions of n - the class O(n^2). They differ in specifics, but they are similar enough.
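To make the "same class" point concrete, here is a minimal sketch (witness constants chosen by hand for this example) showing why O(9999 * n^2) and O(n^2) are the same set: a constant C that works against 9999 * n^2 converts into the constant 9999 * C against n^2, and vice versa.

```python
# If |g(n)| <= C * (9999 * n^2) for n >= n0, then |g(n)| <= (9999 * C) * n^2
# for the same n0 - the 9999 is simply absorbed into the constant.
# Finite sampling again: this checks the inequality, it does not prove it.

def holds(g, f, C, n0, n_max=10_000):
    """Check |g(n)| <= C * f(n) for all sampled n in [n0, n_max]."""
    return all(abs(g(n)) <= C * f(n) for n in range(n0, n_max + 1))

g1 = lambda n: 9999 * n**2 + n

# Witness for g1 ∈ O(9999 * n^2): C = 2, n0 = 1 (since n <= 9999 * n^2 here).
print(holds(g1, lambda n: 9999 * n**2, C=2, n0=1))   # True
# Absorbing the factor: C' = 2 * 9999 is a witness for g1 ∈ O(n^2).
print(holds(g1, lambda n: n**2, C=2 * 9999, n0=1))   # True
```

This is exactly why multiplicative constants are dropped: membership in the class is unchanged, so writing O(9999 * n^2) carries no extra information at the level of abstraction big O is designed for.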
The fact that it's a class is why I edited your question and corrected the notation from = O(9999 * n^2) to ∈ O(9999 * n^2).
By the way, I believe your question would have been a better fit on cs.stackexchange.