mersenneforum.org Apple Moving to ARM CPUs?

2018-04-04, 20:33   #12
ixfd64
Bemusing Prompter

"Danny"
Dec 2002
California

28·32 Posts

Does anyone know how much of an effect this will have on GIMPS? Prime95 doesn't presently support ARM-based chips, but Mlucas should. Ernst has mentioned that it's not as fast as Prime95 and doesn't support P-1 factoring, but the difference shouldn't be that huge.

Last fiddled with by ixfd64 on 2018-04-04 at 20:50
2018-04-04, 20:44   #13
ewmayer
2ω=0

Sep 2002
República de California

2×13×443 Posts

Quote:
 Originally Posted by kladner
 I can't get Firefox or IE to open this link. FF says:
 Code:
 Fastly error: unknown domain: community.arm.com. Please check that this domain has been added to a service. Details: cache-mdw17348-MDW
I got an "SSL no cypher overlap" error in FF v22 (such errors are pretty common in this very old FF version), but the page opened just fine in NewMoon (v27 of the PaleMoon FF dev-fork). Here is the article text:
Quote:
 Today at Hot Chips in Cupertino, I had the opportunity to present the latest update to our Armv8-A architecture, known as the Scalable Vector Extension or SVE. Before going into the technical details, the key points about Armv8-A SVE are:

 o Arm is significantly extending the vector processing capabilities associated with AArch64 (64-bit) execution in the Arm architecture, now and into the future, enabling implementation choices for vector lengths that scale from 128 to 2048 bits.
 o High Performance Scientific Compute provides an excellent focus for the introduction of this technology and its associated ecosystem development.
 o SVE features will enable advanced vectorizing compilers to extract more fine-grain parallelism from existing code and so reduce software deployment effort.

 I'll first provide some historical context. Armv7 Advanced SIMD (aka the Arm NEON instructions) is ~12 years old, a technology originally intended to accelerate media processing tasks on the main processor. It operated on well-conditioned data in memory with fixed-point and single-precision floating-point elements in sixteen 128-bit vector registers. With the move to AArch64, NEON gained full IEEE double-precision floating point, 64-bit integer operations, and grew the register file to thirty-two 128-bit vector registers. These evolutionary changes made NEON a better compiler target for general-purpose compute.

 SVE is a complementary extension that does not replace NEON, and was developed specifically for vectorization of HPC scientific workloads. Immense amounts of data are being collected today in areas such as meteorology, geology, astronomy, quantum physics, fluid dynamics, and pharmaceutical research. Exascale computing (the execution of a billion billion floating-point operations, or exaFLOPs, per second) is the target that many HPC systems aspire to over the next 5-10 years.
 In addition, advances in data analytics and areas such as computer vision and machine learning are already increasing the demands for parallelization of program execution, today and into the future. Over the years, considerable research has gone into determining how best to extract more data-level parallelism from general-purpose programming languages such as C, C++ and Fortran. This has resulted in the inclusion of vectorization features such as gather load & scatter store, per-lane predication, and of course longer vectors. A key choice to make is the most appropriate vector length, where many factors may influence the decision:

 o Current implementation technology and associated power, performance and area tradeoffs.
 o The specific application program characteristics.
 o The market, which is HPC today; in common with general trends in computer architecture evolution, a growing need for longer vectors is expected in other markets in the future.

 Rather than specifying a specific vector length, SVE allows CPU designers to choose the most appropriate vector length for their application and market, from 128 bits up to 2048 bits per vector register. SVE also supports a vector-length agnostic (VLA) programming model that can adapt to the available vector length. Adoption of the VLA paradigm allows you to compile or hand-code your program for SVE once, and then run it at different implementation performance points, while avoiding the need to recompile or rewrite it when longer vectors appear in the future. This reduces deployment costs over the lifetime of the architecture; a program just works and executes wider and faster.

 Scientific workloads, mentioned earlier, have traditionally been carefully written to exploit as much data-level parallelism as possible with careful use of OpenMP pragmas and other source code annotations. It's therefore relatively straightforward for a compiler to vectorize such code and make good use of a wider vector unit.
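The VLA strip-mining pattern described above can be sketched in plain, portable C. This is a hypothetical illustration, not real ACLE/SVE intrinsics code: the function name vla_axpy and the vl parameter are made up, with vl standing in for the vector length that SVE hardware would report at run time (e.g. via svcntd() in the intrinsics), and the inner loop standing in for one predicated vector operation.

```c
#include <stddef.h>

/* Vector-length-agnostic strip-mined loop, sketched in plain C.
   The same code handles any "hardware" vector length vl (2, 4, 32, ...),
   which is the essence of the write-once, run-anywhere VLA model. */
void vla_axpy(double *y, const double *x, double a, size_t n, size_t vl)
{
    for (size_t i = 0; i < n; i += vl) {
        /* Predicate: a lane is active only while i + lane < n.
           SVE builds such a mask with a WHILELT-style instruction,
           so the final partial chunk needs no scalar tail loop. */
        size_t active = (n - i < vl) ? (n - i) : vl;
        for (size_t lane = 0; lane < active; lane++)
            y[i + lane] += a * x[i + lane];   /* one "vector" op */
    }
}
```

Calling vla_axpy with vl = 4 or vl = 16 produces identical results; only the number of trips through the outer loop changes, which is exactly the property that lets VLA binaries run unmodified on wider future hardware.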
 Supercomputers are also built with the wide, high-bandwidth memory systems necessary to feed a longer vector unit. However, while HPC is a natural fit for SVE's longer vectors, it offers an opportunity to improve vectorizing compilers that will be of general benefit over the longer term as other systems scale to support increased data-level parallelism.

 It is worth noting at this point that Amdahl's law tells us the theoretical limit of a task's speedup is governed by the amount of unparallelizable code. If you succeed in vectorizing 10% of your execution and make that code run 4 times faster (e.g. a 256-bit vector allows 4x64b parallel operations), then you've reduced 1000 cycles down to 925 cycles, providing a limited speedup for the power and area cost of the extra gates. Even if you could vectorize 50% of your execution infinitely (unlikely!) you've still only doubled the overall performance. You need to be able to vectorize much more of your program to realize the potential gains from longer vectors.

 So SVE also introduces novel features that begin to tackle some of the barriers to compiler vectorization. The general philosophy of SVE is to make it easier for a compiler to opportunistically vectorize code where it would not normally be possible or cost effective to do so. What are the new features and the benefits of SVE compared to NEON?
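The Amdahl's-law arithmetic above can be checked with a one-line formula. A minimal sketch (the function name amdahl_cycles is made up): vectorize a fraction f of a workload and speed that fraction up by a factor s, and the total cost drops from total to total*(1-f) + total*f/s.

```c
/* Amdahl's law for cycle counts: only the vectorized fraction f
   shrinks (by factor s); the remaining 1-f is untouched. */
double amdahl_cycles(double total, double f, double s)
{
    return total * (1.0 - f) + total * f / s;
}
```

Plugging in the article's numbers: amdahl_cycles(1000, 0.10, 4.0) gives 900 + 25 = 925 cycles, and with f = 0.5 the result can never fall below 500 cycles no matter how large s gets, i.e. the 2x ceiling the article mentions.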
 Code:
 Feature                                         Benefit
 ----------------------------------------------  ---------------------------------------------------------------
 Scalable vector length (VL)                     Increased parallelism while allowing implementation choice of VL
 VL agnostic (VLA) programming                   Supports a programming paradigm of write-once, run-anywhere scalable vector code
 Gather-load & Scatter-store                     Enables vectorization of complex data structures with non-linear access patterns
 Per-lane predication                            Enables vectorization of complex, nested control code containing side effects and avoidance of loop heads and tails (particularly for VLA)
 Predicate-driven loop control and management    Reduces vectorization overhead relative to scalar code
 Vector partitioning and SW managed speculation  Permits vectorization of uncounted loops with data-dependent exits
 Extended integer and FP horizontal reductions   Allows vectorization of more types of reducible loop-carried dependencies
 Scalarized intra-vector sub-loops               Supports vectorization of loops containing complex loop-carried dependencies

 SVE is targeted at the A64 instruction set only, as a performance enhancement associated with 64-bit computing (known as AArch64 execution in the Arm architecture). A64 is a fixed-length instruction set, where all instructions are encoded in 32 bits. Currently 75% of the A64 encoding space is already allocated, making it a precious resource. SVE occupies just a quarter of the remaining 25%, in other words one sixteenth of the A64 encoding space, as follows:

 The variable-length aspect of SVE is managed through predication, meaning that it does not require any encoding space. Care was taken with respect to predicated execution to constrain that aspect of the encoding space. Load and store instructions are assigned half of the allocated SVE instruction space, limited by careful consideration of addressing modes. Nearly a quarter of this space remains unallocated and available for future expansion.
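The per-lane predication row in the table above can be illustrated in plain C. This is a hypothetical sketch, not SVE intrinsics: the fixed VL and the bool mask stand in for a hardware predicate register. A scalar loop whose body hides behind an if (a branch per element) becomes a mask computation plus an operation that inactive lanes simply skip, which is what lets such loops vectorize.

```c
#include <stdbool.h>
#include <stddef.h>

enum { VL = 4 };  /* pretend hardware vector length, in lanes */

/* Scalar form: for each i, if (x[i] > 0) y[i] = 1/x[i];
   Predicated form: build a lane mask, then apply the op under it,
   leaving inactive lanes (non-positive x, or out of bounds) untouched. */
void reciprocal_pos(double *y, const double *x, size_t n)
{
    for (size_t i = 0; i < n; i += VL) {
        bool pred[VL];
        /* Predicate generation: in-bounds AND condition holds. */
        for (size_t l = 0; l < VL; l++)
            pred[l] = (i + l < n) && (x[i + l] > 0.0);
        /* "Vector" operation executed under the predicate. */
        for (size_t l = 0; l < VL; l++)
            if (pred[l])
                y[i + l] = 1.0 / x[i + l];
    }
}
```

The same masking idea, driven by comparisons that update the predicate mid-loop, is what the "vector partitioning and SW managed speculation" row uses to handle uncounted loops with data-dependent exits.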
 In summary, SVE opens a new chapter for the Arm architecture in terms of the scale and opportunity for increasing levels of vector processing on Arm processor cores. It is early days for SVE tools and software, and it will take time for SVE compilers and the rest of the SVE software ecosystem to mature. HPC is the current focus and catalyst for this compiler work, and creates development momentum in areas such as Linux distributions and optimized libraries for SVE, as well as in Arm and third-party tools and software. We are already engaging with key members of the Arm partnership, and will now broaden that engagement across the open-source community and wider Arm ecosystem to support development of SVE and the HPC market, enabling a path to efficient Exascale computing. Stay tuned for more information.

 Following on from the announcement and the details provided, initial engagement with the open-source community will start with the upstreaming and review of tools support and associated standards. A Beta release of the SVE supplement to the Armv8-A Architecture Reference Manual is now available to download. Annotated SVE VLA programming examples can be found here.

 Nigel Stephens is Lead ISA Architect and Arm Fellow.

Last fiddled with by ewmayer on 2018-04-04 at 21:16

2018-04-05, 21:16   #14
kladner

"Kieren"
Jul 2011
In My Own Galaxy!

22·2,503 Posts

Quote:
 Originally Posted by ewmayer I got an "SSL no cypher overlap" error in FF v22 (such are pretty common in this very old FF version), but opened just fine in NewMoon (v27 of the PaleMoon FF dev-fork). Here is the article text:
It was screwed up for FF 52.6.0, too. Thanks for the text!

