#1
Dec 2011
After milion nines:)
11001011000₂ Posts
I did a search:

Code:
awk /40...../ t17_b2_k??.npg >ex.txt
awk /40...../ t17_b2_k1??.npg >>ex.txt
awk /40...../ t17_b2_k4??.npg >>ex.txt
sort -n -k2,2 -k1,1 ex.txt > EXSORT4.txt
rm ex.txt

and this is the result I got:

Code:
...
21 4099999
31 4099999
181 4099999
405 4100001
401 4100002
...
409 4999985
407 4999994

So why is my search limit "broken", but only for k from 400 to 410?
#2
May 2009
Moscow, Russia
2⁴·181 Posts
Include a space in the search pattern:

Code:
awk /\ 40...../ t17_b2_k??.npg

I use this construction to cut ranges from the .npg files:

Code:
awk '{if (($2>4000000) && ($2<4100000)) print $0}' 5.npg

Last fiddled with by unconnected on 2016-02-28 at 00:43
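For reference, a minimal illustration of why the space matters: the unanchored pattern /40...../ matches "40" anywhere on the line, so any k in 400-409 in the first column matches no matter what n is, while the space-anchored pattern only matches when the second column starts with 40. The sample lines below are taken from the output above.

Code:
# unanchored: the "40" in k=405 already matches, even though n is outside the range
echo "405 4100001" | awk '/40...../'      # prints the line (unwanted)

# space-anchored: only matches when the n column starts with 40
echo "405 4100001" | awk '/ 40...../'     # prints nothing
echo "181 4099999" | awk '/ 40...../'     # prints the line

# a numeric test on the second field avoids the regex pitfall entirely
echo "405 4100001" | awk '$2>4000000 && $2<4100000'   # prints nothing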
#3
Dec 2011
After milion nines:)
11001011000₂ Posts
Thanks!
Works perfectly well :)
#4
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
3·7·479 Posts
Quote:
Originally Posted by unconnected
I use this construction to cut ranges from the .npg files:
awk '{if (($2>4000000) && ($2<4100000)) print $0}' 5.npg
Especially when you are dealing with wide ranges of values (over one order of magnitude; grep patterns are still possible, but ugly). Yet another way of capturing a range like this is 'int($2/100000)==40'.

All of these are equivalent (note that if you are only printing, then awk 'condition' is equivalent to awk '{if(condition){print}}'):

Code:
awk '{if (($2>=4000000) && ($2<4100000)) print $0}' 5.npg
awk '$2>=4000000 && $2<4100000' 5.npg
awk 'int($2/100000)==40' 5.npg

# Unrelated; here is a simple trick to find squares in the first column
awk 'int(sqrt($1+0.1))**2==$1' 5.npg
# here +0.1 is to avoid rare sqrt return values like 719.9999, costs nothing
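As a quick sanity check of the square-finding one-liner, here is a hypothetical run on a throwaway file (the name test.npg is made up for the example; note that ** is a gawk extension, the portable spelling of the exponentiation operator is ^):

Code:
# hypothetical two-column sample file: k in column 1, n in column 2
printf '16 4000001\n20 4000002\n25 4000003\n' > test.npg

awk 'int(sqrt($1+0.1))**2==$1' test.npg
# prints only the rows whose first column is a perfect square:
# 16 4000001
# 25 4000003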
#5
Dec 2011
After milion nines:)
2³×7×29 Posts
My next problem is this. In the npg file I have this data:

Code:
21 4099999
31 4099999
181 4099999

I now know how to extract the needed data from the npg file, but I want a next step: once the data is extracted, a new (same) file is created with the extracted data removed. For example: from the data above, 31 4099999 is removed, and the npg file then contains only 21 4099999 and 181 4099999.

It is easy to remove one or two data lines manually, but doing it on a large scale by hand is a bit hard.

Last fiddled with by pepi37 on 2016-06-29 at 07:25 Reason: add more info
#6
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
2⁴×13×29 Posts
It is possible to fiddle this sort of thing by using srfile. Assuming that the npg file is in a form supported by srfile, you can get it to output one file per k in npg format with -g. You can then delete the unwanted ks and recombine the rest by outputting in prp format, which is what you have currently (combined npg files).

Looking back at the question, this might not answer it. If necessary you can change the header to an ABC header and trick it into doing lots of things.
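If srfile is not at hand, the filtering can also be sketched with standard tools: first extract the lines to be dropped, then remove them from the original file. This is only a rough sketch, assuming a plain two-column "k n" file with a single NewPGen-style header line; the file names t17.npg, removed.txt and t17.new.npg are made up for the example.

Code:
# 1. collect the lines to be removed (here: the 4.0M-4.1M range, skipping the header)
awk 'NR>1 && $2>4000000 && $2<4100000' t17.npg > removed.txt

# 2. keep only the lines NOT listed in removed.txt (whole-line, fixed-string match)
grep -v -x -F -f removed.txt t17.npg > t17.new.npg

# or the same in a single awk pass, always keeping the header line
# (assumes removed.txt is non-empty)
awk 'NR==FNR {drop[$0]=1; next} FNR==1 || !($0 in drop)' removed.txt t17.npg > t17.new.npg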