Commit b798d66

1 parent 7eb8083 commit b798d66

File tree

3 files changed, +207 -0 lines changed

drafts/late_ff_test.n

Lines changed: 40 additions & 0 deletions

comparing the underscores memoizer to a case of 500 results, on Firefox 2021

Testing: warmup benchmarks
Code: Math.sqrt(i) ( square root )
136.7323 Kfunc/s avg : 7455.619881760318
154.4427 Kfunc/s avg : 7455.619881755367
Code: r+=i ( fastest dummy test )
494.8802 Kfunc/s avg : 125002.5
374.3408 Kfunc/s avg : 125002.5
Avg op/s 290099.00691537646
mutil.js:282:13

Testing: number, sin, mixed hit:500
Code: r+=Math.sin(number[i]) ( Math.sin )
28.2365 Kfunc/s avg : 3.920301118039033e-13
33.6509 Kfunc/s avg : 3.92030111803885e-13
Code: r+=ensin_0(number[i]) ( ensin_0 )
1.2053 Kfunc/s avg : 3.92030111804194e-13
1.7656 Kfunc/s avg : 3.92030111804194e-13
Code: r+=ensin__(number[i]) ( ensin__ )
1.5564 Kfunc/s avg : 3.92030111804194e-13
1.3629 Kfunc/s avg : 3.92030111804194e-13
Code: r+=ensin:500(number[i]) ( ensin:500 )
1.5821 Kfunc/s avg : 3.92030111804194e-13
1.0524 Kfunc/s avg : 3.9203011180419395e-13
Avg op/s 8801.531209662862

Testing: number, multisin, mixed hit:500
Code: r+=Multi.sin(number[i]) ( Multi.sin )
4.6483 Kfunc/s avg : -76.39881759946037
2.6075 Kfunc/s avg : -76.39881759946057
Code: r+=enmulti_0(number[i]) ( enmulti_0 )
872.2899 func/s avg : -76.39881759946137
942.9967 func/s avg : -76.39881759946131
Code: r+=enmulti__(number[i]) ( enmulti__ )
710.5950 func/s avg : -76.3988175994616
901.1175 func/s avg : -76.39881759946137
Code: r+=enmulti:500(number[i]) ( enmulti:500 )
885.4073 func/s avg : -76.39881759946148
1.8303 Kfunc/s avg : -76.39881759946084
Avg op/s 1674.8084256709367
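
The Kfunc/s figures above come from timing tight call loops. A minimal harness sketch follows; the `bench` name and its output format are illustrative assumptions, not the actual mutil.js harness:

```javascript
// Minimal micro-benchmark sketch: call fn n times in a tight loop and
// report thousands of calls per second, plus the running average of the
// results (the "avg" column doubles as a check that variants agree).
function bench(label, fn, n) {
    var r = 0
    var t0 = Date.now()
    for (var i = 0; i < n; i++) { r += fn(i) }
    var secs = (Date.now() - t0) / 1000 || 1e-3   // guard divide-by-zero
    var kfuncs = (n / secs) / 1000
    console.log(label + ': ' + kfuncs.toFixed(4) + ' Kfunc/s avg : ' + (r / n))
    return kfuncs
}

bench('Math.sqrt(i)', Math.sqrt, 1000000)
bench('r+=i', function (i) { return i }, 1000000)
```

Note that the matching "avg" values across variants in the log above show each memoized version computing the same result as its source function.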

drafts/nosering.md

Lines changed: 53 additions & 0 deletions

Nose Ring Buffer - for quick collection of most frequent values.

When there is a need to discover the most frequent values in a set, the instinct is to tally a frequency count for each value present and then sort the resulting set of value:tally pairs - putting the most frequent values at the head of the sorted pairs and the least frequent at the tail.

Building a set of value:tally pairs and sorting it is not a featherlight process. A potentially lighter way is to grow a list of values read out of the set, and sort this list while it is growing. This can be an O(n) process, and the sorted list of values can be lighter to create than a list of value:tally pairs.

A 'Nose Ring Buffer' capable of this can be implemented with a single fixed array and a few pointers.
It begins as a familiar cyclic ring buffer, and grows a 'nose' at its head where repeat values are bubble-sorted as they hit previous occurrences in the array.
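
For comparison, the tally-and-sort baseline described above can be sketched as follows; `tallySort` is a name invented here for illustration:

```javascript
// Baseline: tally a frequency count per value, then sort the
// value:tally pairs by count. O(n) to tally plus O(k log k) to sort
// the k distinct values.
function tallySort(vals) {
    var counts = new Map()
    for (var j = 0; j < vals.length; j++) {
        var v = vals[j]
        counts.set(v, (counts.get(v) || 0) + 1)
    }
    // Most frequent first.
    return Array.from(counts.entries()).sort(function (a, b) { return b[1] - a[1] })
}

tallySort([3, 1, 3, 2, 3, 1])   // → [[3, 3], [1, 2], [2, 1]]
```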

function sniff(vals)
{
    var nrbuff = new Array(vals.length)

    var ring_max = vals.length
    var nose_max = Math.floor(ring_max/2)  //cap on how far the nose can grow
    var nose_tip = 0                       //end of the sorted 'nose' region
    var ringpin = -1                       //last write position in the ring
    var top_tally = 0                      //repeat count of the head element

    for( var j=0 ; j<vals.length ; j++ ){
        var cv = vals[j]

        if(nrbuff[0]===cv){ top_tally++; continue }

        var hit = false
        for(var i=1; i<nrbuff.length ; i++){
            if(nrbuff[i]===cv){
                hit = true
                var dest = (i<=nose_tip) ? i-1 : nose_tip

                if(i===1 && top_tally>1){
                    top_tally--                //head element defends its place
                }else{
                    nrbuff[i] = nrbuff[dest]   //evict an element down
                    nrbuff[dest] = cv          //swap hit element up
                    if(dest===nose_tip && nose_tip<nose_max){ nose_tip++ }
                }
                break
            }
        }
        if(!hit){
            //not found, so write to the ring, wrapping past the nose
            if(++ringpin >= ring_max){ ringpin = nose_tip }
            nrbuff[ringpin] = cv
        }
    }
    return nrbuff
}
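
The core hit-bubbling behaviour can be demonstrated in isolation. This is a simplified, self-contained sketch, not the draft's sniff (it has no fixed ring or nose cap, just the transpose-toward-head move):

```javascript
// Simplified sketch: on a hit, swap the value one slot toward the head;
// on a miss, append to the tail. Frequently seen values drift toward
// index 0 as the input is scanned.
function bubbleHits(vals) {
    var buff = []
    for (var j = 0; j < vals.length; j++) {
        var cv = vals[j]
        var i = buff.indexOf(cv)
        if (i > 0) {
            buff[i] = buff[i - 1]   // evict an element down
            buff[i - 1] = cv        // swap hit element up
        } else if (i < 0) {
            buff.push(cv)           // miss: append to the tail
        }
    }
    return buff
}

bubbleHits(['a', 'b', 'c', 'b', 'b', 'c'])   // → ['b', 'c', 'a']
```

The most repeated value, 'b', ends up at the head without ever building value:tally pairs.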

drafts/slate.n

Lines changed: 114 additions & 0 deletions

#fencache design notes
------------
* the key/val storage option could perform better with evenly distributed inputs,
  but is generally hindered because arg/keys have to be stringified to be stowed
  and checked.
  A recent-value buffering stage could improve the store.

* the double ring buffer storage could turn itself off on long runs of misses
  and resume after a resting count,
  or could reduce its search and not write...
  Basically a gizmo to give up on repeat misses, and forget the tail or the whole queue,
  since excessive misses entail excessive time consumed.
  A little extra time may need to be added to the general case
  in order to count and react to successive excessive misses,
  e.g. to 20 misses on 20 entries.
  The optimisations could choke general performance on problem patterns,
  but they may greatly smooth reaction to the more common problem pattern
  of frequent cache-miss runs.

gizmo 1
  after a number of long cache misses and writes,
  when rx gets to the middle, split the cache and fill again,
  as the low split of the cache should be sorted infrequent anyway;
  little point in keeping checking it
gizmo 2
  when a miss and rx !== split point, count it and write the result
  when a miss and rx === split point and the count is high, refill again

the problem cache strategy is different for slow functions

//----------------


//----------
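
The give-up-on-misses idea could be sketched as a counter wrapped around any cache lookup. The `missGate` name and its shape are illustrative assumptions, not fencache's API:

```javascript
// Sketch: skip cache search after `limit` consecutive misses, then
// resume after a resting count of `rest` further calls. Illustrative
// names only; not the fencache implementation.
function missGate(limit, rest) {
    var misses = 0, sleeping = 0
    return {
        active: function () {                 // should we search the cache?
            if (sleeping > 0) { sleeping--; return false }
            return true
        },
        hit: function () { misses = 0 },      // any hit resets the run
        miss: function () {
            if (++misses >= limit) { sleeping = rest; misses = 0 }
        }
    }
}

var gate = missGate(3, 5)
gate.miss(); gate.miss(); gate.miss()   // three straight misses...
gate.active()                            // → false: searches skipped for a while
```

The per-call cost is a counter check, which is the "little extra time added to the general case" the note anticipates.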

# fencache

Performance notes and data
--------------------------

Example benchmark scores:

Test of worst case (all misses) with random unique inputs

fencached performance:
```
Math.sin src performance, 30 Million func/s

function    csize   speed % (of src)
enSin       1       60.0
enSin       2       50.0
enSin       3       15.0
enSin       4       6.0
ensin       50      0.8
ensin       200     0.2
```

Test of best case with 1 repeated input:
```
Math.sin, 30 Million func/s
A multi.trig function, 1.7 Million func/s

function    csize   speed %
enSin       1       220
enMTrig     1       3800
```

Test on 90 normally distributed distinct values
```
Multi.trig function : 1.7 Mfunc/s

function    csize   speed %
enTrig      1       95
enTrig      2       95
enTrig      3       90
enTrig      12      110
enTrig      60      220
enTrig      100     310
enTrigx     0       840
```

Test of string processing functions on a selection of random-length
strings of up to 300 chars each, 200 of them repeated 4 times mixed
with 200 non-repeated unique strings
```
fast string process function : 130 Million func/s
slow string process function : 5.9 Thousand func/s

function    csize   speed %
faststr     1       25.0
faststr     3       3.0
faststr     10      1.5
faststr     100     0.3
faststr     200     0.13
faststr     400     0.2
faststrx    0       8

slowstr     1       100
slowstr     3       100
slowstr     10      105
slowstr     100     160
slowstr     200     400
slowstr     400     4600
slowstrx    0       190000 (%!)
```
106+
107+
Writing memory can often be a comparatively heavy nano operation, even when we are just duplicating references. Since memoizing necessitates extra memory writes as well as lookups, memoizing calculation results
108+
109+
is only performant when there is
110+
some degree of repetition of parameters because keeping a larger cache size
111+
entails having more entries to check. Each cache search takes O(size) time
112+
for every miss. Fencache optimises lightly by floating results toward the head
113+
of the list each time they are recalled. This greatly suits unevenly distributed
114+
parameter sets and only marginally slows worst case.
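
The float-toward-head behaviour can be sketched as a tiny fixed-size memoizer. This is a minimal illustration under assumed names (`memoFloat`), not fencache itself, which stores misses in a ring:

```javascript
// Sketch of a fixed-size memo cache: a hit swaps its entry one slot
// toward the head, so frequently recalled arguments get cheaper
// lookups; a miss evicts the tail entry and lands at the tail.
function memoFloat(fn, size) {
    var keys = [], vals = []
    return function (x) {
        var i = keys.indexOf(x)
        if (i > 0) {
            // hit below the head: float one slot toward the head
            keys[i] = keys[i - 1]; keys[i - 1] = x
            var t = vals[i]; vals[i] = vals[i - 1]; vals[i - 1] = t
            return vals[i - 1]
        }
        if (i === 0) { return vals[0] }        // hit at the head
        var r = fn(x)
        if (keys.length >= size) { keys.pop(); vals.pop() }  // evict tail
        keys.push(x); vals.push(r)             // misses land at the tail
        return r
    }
}

var msin = memoFloat(Math.sin, 4)
msin(1); msin(2); msin(1)   // the second msin(1) is a cache hit
```

With an unevenly distributed input, the hot arguments settle near index 0, so the O(size) scan usually terminates early; in the all-miss worst case every lookup still scans the whole cache, matching the scores above.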
