Messages from 161225

Article: 161225
Subject: Re: Tiny CPUs for Slow Logic
From: gnuarm.deletethisbit@gmail.com
Date: Tue, 19 Mar 2019 04:00:05 -0700 (PDT)
On Tuesday, March 19, 2019 at 6:21:24 AM UTC-4, Tom Gardner wrote:
> On 19/03/19 00:13, gnuarm.deletethisbit@gmail.com wrote:
> > Most of us have implemented small processors for logic operations that
> > don't need to happen at high speed.  Simple CPUs can be built into an
> > FPGA using a very small footprint much like the ALU blocks.  There are
> > stack based processors that are very small, smaller than even a few kB
> > of memory.
> >
> > If they were easily programmable in something other than C would anyone
> > be interested?  Or is a C compiler mandatory even for processors
> > running very small programs?
> >
> > I am picturing this not terribly unlike the sequencer I used many years
> > ago on an I/O board for an array processor which had its own assembler.
> > It was very simple and easy to use, but very much not a high level
> > language.  This would have a language that was high level, just not C
> > rather something extensible and simple to use and potentially
> > interactive.
> Who cares about yet another processor programmed in the same
> old language. It would not have a *U*SP. In fact it would be
> "back to the 80s" :)

Sorry, I don't get what any of this means.


> However, if you want to make it interesting enough to pass
> the elevator test, ensure it can do things that existing
> systems find difficult.
>=20
> You should have a look at how the XMOS hardware and software
> complement each other, so that the combination allows hard
> real time operation programming in multicore systems. (Hard
> means guaranteed-by-design latencies between successive i/o
> activities)

Yeah I think the XMOS model is way more complex than what I am
describing.  The XMOS processors are actually very complex and use lots
of gates.  They also don't run all that fast.  Their claim to fame is to
be able to communicate through shared memory as if the other CPUs were
not there in the good way.  Otherwise they are conventional processors,
programmed in conventional ways.

The emphasis here is for the CPU to be nearly invisible as a CPU and much
more like a function block.  You just have to "configure" the operation
by writing a bit of code.  That's why 'C' is not desirable, it would be
too cumbersome for small code blocks.

Rick C.

Article: 161226
Subject: Re: Tiny CPUs for Slow Logic
From: gnuarm.deletethisbit@gmail.com
Date: Tue, 19 Mar 2019 04:09:25 -0700 (PDT)
On Tuesday, March 19, 2019 at 6:27:44 AM UTC-4, Tom Gardner wrote:
>
> Yup. The hardware is easy. Programming is painful, but there
> are known techniques to control it...

That is 'C' world, conventional thinking.  If you can write a hello world
program without using a JTAG debugger, you should be able to write and
debug most programs for this core in the simulator with 100% correctness.
We aren't talking about TCP/IP stacks.


> There's an existing commercially successful set of products in
> this domain. You get 32-core 4000MIPS processors, and the IDE
> guarantees the hard real-time performance.

And they are designed to provide MIPS, not logic functions.

I don't want to go too far into the GA144 since this is not what I'm
talking about inserting into an FPGA, but only as an analogy.  One of the
criticisms of that device is how hard it is to get all 144 processors
cranking at full MIPS.  But the chip is not intended to utilize "the full
MIPS" possible.  It is intended to be like an FPGA where you have CPUs
available to do what you want without regard to squeezing out every
possible MIPS.  No small number of these processors will do nothing other
than pass data and control to their neighbors while mostly idling,
because that is the way they are wired together.

The above mentioned 4000 MIPS processor is clearly intended to utilize
every last MIPS.  Not at all the same, and it will be programmed very
differently.


> Programming uses techniques created in the 70s, first
> implemented in the 80s, and which continually reappear, e.g.
> TI's DSP engines, Rust, Go etc.
>=20
> Understand XMOS's xCORE processors and xC language, see how
> they complement and support each other. I found the net result
> stunningly easy to get working first time, without having to
> continually read obscure errata!

But not at all relevant here since their focus is vastly different from
providing logic functions efficiently.

Rick C.

Article: 161227
Subject: Re: Tiny CPUs for Slow Logic
From: gnuarm.deletethisbit@gmail.com
Date: Tue, 19 Mar 2019 04:14:52 -0700 (PDT)
On Tuesday, March 19, 2019 at 6:56:42 AM UTC-4, already...@yahoo.com wrote:
> On Tuesday, March 19, 2019 at 2:13:38 AM UTC+2, gnuarm.del...@gmail.com wrote:
> > Most of us have implemented small processors for logic operations that
> > don't need to happen at high speed.  Simple CPUs can be built into an
> > FPGA using a very small footprint much like the ALU blocks.  There are
> > stack based processors that are very small, smaller than even a few kB
> > of memory.
> >
> > If they were easily programmable in something other than C would
> > anyone be interested?  Or is a C compiler mandatory even for
> > processors running very small programs?
> >
> > I am picturing this not terribly unlike the sequencer I used many
> > years ago on an I/O board for an array processor which had its own
> > assembler.  It was very simple and easy to use, but very much not a
> > high level language.  This would have a language that was high level,
> > just not C rather something extensible and simple to use and
> > potentially interactive.
> >
> > Rick C.
>
> It is clear that you have Forth in mind.
> It is less clear why you don't say it straight.

Because this is not about Forth.  It is about very small processors.  I
would not really bother with Forth as the programming language
specifically because that would be a layer on top of what you are doing
and to be efficient it would need to be programmed in assembly.

That said, the assembly language for a stack processor is much like Forth
since Forth uses a virtual stack machine as its programming model.  So
yes, it would be similar to Forth.  I most likely would use Forth to
write programs for these, but that is just my preference since that is
the language I program in.

But the key here is to program the CPUs in their stack oriented assembly.
That's not really Forth even if it is "Forth like".
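
As a toy illustration (a hypothetical Python sketch, not the instruction
set of any real core; the `dup`/`swap`/`+` names just follow Forth
convention), the kind of stack-oriented assembly meant here behaves like
this:

```python
# Toy stack-machine interpreter (hypothetical, purely illustrative).
# Every instruction works on an implicit data stack, which is why the
# assembly for such a core ends up looking Forth-like even though it
# isn't Forth.
def run(program):
    stack = []
    for op in program:
        if isinstance(op, int):
            stack.append(op)            # literal: push onto the stack
        elif op == "dup":
            stack.append(stack[-1])     # duplicate top of stack
        elif op == "swap":
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)         # add the top two items
        else:
            raise ValueError(f"unknown op {op!r}")
    return stack

# "3 dup +" doubles the top of stack: push 3, copy it, add the copies.
print(run([3, "dup", "+"]))  # [6]
```

The point of the sketch is that a program for such a core is just a short
list of stack operations, which is closer to "configuring" a function
block than to conventional software.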

Is that what you wanted to know?

Rick C.

Article: 161228
Subject: Re: Tiny CPUs for Slow Logic
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Tue, 19 Mar 2019 11:46:40 +0000
On 19/03/19 11:00, gnuarm.deletethisbit@gmail.com wrote:
> On Tuesday, March 19, 2019 at 6:21:24 AM UTC-4, Tom Gardner wrote:
>> On 19/03/19 00:13, gnuarm.deletethisbit@gmail.com wrote:
>>> Most of us have implemented small processors for logic operations that
>>> don't need to happen at high speed.  Simple CPUs can be built into an
>>> FPGA using a very small footprint much like the ALU blocks.  There are
>>> stack based processors that are very small, smaller than even a few kB of
>>> memory.
>>> 
>>> If they were easily programmable in something other than C would anyone
>>> be interested?  Or is a C compiler mandatory even for processors running
>>> very small programs?
>>> 
>>> I am picturing this not terribly unlike the sequencer I used many years
>>> ago on an I/O board for an array processor which had it's own assembler.
>>> It was very simple and easy to use, but very much not a high level
>>> language.  This would have a language that was high level, just not C
>>> rather something extensible and simple to use and potentially
>>> interactive.
>> Who cares about yet another processor programmed in the same old language.
>> It would not have a *U*SP. In fact it would be "back to the 80s" :)
> 
> Sorry, I don't get what any of this means.
> 
> 
>> However, if you want to make it interesting enough to pass the elevator
>> test, ensure it can do things that existing systems find difficult.
>> 
>> You should have a look at how the XMOS hardware and software complement
>> each other, so that the combination allows hard real time operation
>> programming in multicore systems. (Hard means guaranteed-by-design
>> latencies between successive i/o activities)
> 
> Yeah I think the XMOS model is way more complex than what I am describing.
> The XMOS processors are actually very complex and use lots of gates.  They
> also don't run all that fast.  

Individually not especially fast, aggregate fast.


> Their claim to fame is to be able to
> communicate through shared memory as if the other CPUs were not there in the
> good way.  

Not just shared memory, *far* more interesting than that.

Up to 8 cores in a "tile" share memory.
Comms between tiles is via an interconnection network.
Comms with i/o is via the same interconnection network.

At the program level there is *no* difference between comms
via shared memory and comms via interconnection network.
Nor is there any difference between comms with i/o and
comms with other cores.

All comms is via channels. That's one thing that makes
the hardware+software environment unique.
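
[The uniformity described above is the CSP channel model.  A rough
analogy in Python (not xC; thread-based and purely illustrative) might
look like this, where the receiver cannot tell whether the other end of
the channel is another core or an i/o process:]

```python
# Rough CSP-style analogy (illustrative only, not xC): a "core" is a
# thread, and all communication is a blocking channel operation.  The
# code doing the receive is identical whether the sender is another
# core or an i/o handler -- that is the uniformity being described.
import queue
import threading

def producer(out):
    for i in (10, 20, 30):
        out.put(i)        # blocking send, like a channel output
    out.put(None)         # sentinel marking end of stream

ch = queue.Queue(maxsize=1)   # small queue: sender blocks when full
threading.Thread(target=producer, args=(ch,)).start()

received = []
while (v := ch.get()) is not None:   # blocking receive from the channel
    received.append(v)
print(received)  # [10, 20, 30]
```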


> Otherwise they are conventional processors, programmed in
> conventional ways.

No. You are missing the key differentiating points...

Conventional processors and programming treat multicore
programming as an advanced add-on library - explicitly
so in the case of C. And a right old mess that is.

xC+xCORE *start* by presuming multicore systems, and
use a set of harmonious concepts to make multicore
programming relatively easy and predictable.


> The emphasis here is for the CPU to be nearly invisible as a CPU and much
> more like a function block.  

Why bother? What would be the *benefit*?

Yes, you can use a screw instead of a nail, but
that doesn't mean there is a benefit. Unless, of
course, you can't use a hammer.


> You just have to "configure" the operation by
> writing a bit of code.  That's why 'C' is not desirable, it would be too
> cumbersome for small code blocks.

Article: 161229
Subject: Re: Tiny CPUs for Slow Logic
From: gnuarm.deletethisbit@gmail.com
Date: Tue, 19 Mar 2019 04:52:27 -0700 (PDT)
On Tuesday, March 19, 2019 at 6:26:37 AM UTC-4, David Brown wrote:
> On 19/03/2019 09:32, gnuarm.deletethisbit@gmail.com wrote:
> > On Tuesday, March 19, 2019 at 4:15:47 AM UTC-4, David Brown wrote:
> >> On 19/03/2019 01:13, gnuarm.deletethisbit@gmail.com wrote:
> >>> Most of us have implemented small processors for logic
> >>> operations that don't need to happen at high speed.  Simple CPUs
> >>> can be built into an FPGA using a very small footprint much like
> >>> the ALU blocks. There are stack based processors that are very
> >>> small, smaller than even a few kB of memory.
> >>>
> >>> If they were easily programmable in something other than C would
> >>> anyone be interested?  Or is a C compiler mandatory even for
> >>> processors running very small programs?
> >>>
> >>> I am picturing this not terribly unlike the sequencer I used
> >>> many years ago on an I/O board for an array processor which had
> >>> its own assembler.  It was very simple and easy to use, but very
> >>> much not a high level language.  This would have a language that
> >>> was high level, just not C rather something extensible and simple
> >>> to use and potentially interactive.
> >>>
> >>> Rick C.
> >>>
> >>
> >> If it is going to appeal to software developers, you need C.  And
> >> it has to be reasonable, standard C, even if it is for small
> >> devices - programmers are fed up with the pains needed for special
> >> device-specific C on 8051, AVR, PIC, etc.  That does not
> >> necessarily mean it has to be fast, but it should work with
> >> standard language.  Having 16-bit size rather than 8-bit size makes
> >> a huge difference to how programmers feel about the device - aim
> >> for something like the msp430.
> >>
> >> You might, however, want to look at extensions for CSP-style
> >> communication between cpus - something like XMOS XC.
> >>
> >> If it is to appeal to hardware (FPGA) developers, C might not be
> >> as essential.  Some other kind of high level language, perhaps
> >> centred around state machines, might work.
> >>
> >> But when I see "extensible, simple to use and potentially
> >> interactive", I fear someone is thinking of Forth.  People who are
> >> very used to Forth find it a great language - but you need to
> >> understand that /nobody/ wants to learn it.  Most programmers would
> >> rather work in assembler than Forth.  You can argue that this
> >> attitude is irrational, and that Forth is not harder than other
> >> languages - you might be right.  But that doesn't change matters.
> >
> > Certainly this would be like Forth, but the reality is I'm thinking
> > of a Forth like CPU because they can be designed so simply.
>
> I appreciate that.
>
> I can only tell you how /I/ would feel here, and let you use that for
> what you think it is worth.  I don't claim to speak for all software
> developers, but unless other people are giving you feedback too, then
> this is the best you've got :-)  Remember, I am not trying to argue
> about the pros and cons of different designs or languages, or challenge
> you to persuade me of anything - I'm just showing you how software
> developers might react to your design ideas.

That alone is a misunderstanding of what I am suggesting.  I see no
reason to involve "programmers".  I don't think any FPGA designer would
have any trouble using these processors and "programmers" are not
required.  Heck, the last company I worked for designed FPGAs in the
software department, so everyone writing HDL for FPGAs was a
"programmer", so maybe the distinction is less than I realize.


> > The F18A stack processor designed by Charles Moore is used in the
> > GA144 chip.  There are 144 of them with unusual interconnections that
> > allow the CPU to halt waiting for communications, saving power.  The
> > CPU is so small that it could be included in an FPGA as what would be
> > equivalent to a logic element.
>
> Yes, but look how popular the chip is - it is barely a blip in the
> landscape.  There is no doubt that this is a technologically fascinating
> device.

That's not the issue; I'm not proposing anyone use a GA144.


> However, it is very difficult to program such chips - almost no
> one is experienced with such multi-cpu arrangements, and the design
> requires a completely different way of thinking from existing software
> design.

Again, that's not what I am proposing.  They have hundreds of multipliers
and DSP blocks in FPGAs with no one worrying about how they will tie
together.  These CPUs would be similar.


> Add to that a language that works backwards, and a syntax that
> looks like the cat walked across the keyboard, and you have something
> that has programmers running away.

Now you are interjecting your own thoughts.  I never suggested that cats
be used to program these CPUs.


> My experience with Forth is small and outdated, but not non-existent.

Too bad this isn't about Forth.


> I've worked with dozens of programming languages over the years - I've
> studied CSP, programmed in Occam, functional programming languages, lots
> of assemblies, a small amount of CPLD/FPGA work in various languages,
> and many other kinds of coding.

There are many areas where a "little" knowledge is a dangerous thing.  I
think programming languages and especially FPGA design are among those
areas.


> (Most of my work for the past years has
> been C, C++ and Python.)  I'm not afraid of learning new things.  But
> when I looked at some of the examples for the GA144, three things struck
> me.  One is that it was amazing how much they got on the device.
> Another is to wonder about the limitations you get from the this sort of
> architecture.  (That is a big turn-off with the XMOS.  It's
> fantastically easy to make nice software-based peripherals using
> hardware threads.  And fantastically easy to run out of hardware threads
> before you've made the basic peripherals you get in a $0.50
> microcontroller.)  And the third thing that comes across is how totally
> and utterly incomprehensible the software design and the programming
> examples are.  The GA144 is squarely in the category of technology that
> is cool, impressive, and useless in the real world where developers have
> to do a job, not play with toys.

I see why you started your comments with the big caveat.  You seem to
have a bone to pick with Forth and the GA144, neither of which are what I
am talking about.  You've gotten ahead of yourself.


> Sure, it would be possible to learn this.  But there is no way I could
> justify the investment in time and effort that would entail.
>
> And there is no way I would want to go to a language with less safety,
> poorer typing, weaker tools, harder testing, more limited static
> checking than the development tools I can use now with C and C++.

Yes, well good thing you would never be the person who wrote any code for
this.  No "programmers" allowed, only FPGA designers... and no amateurs
allowed either.  ;)


> > In the same way that the other functional logic elements like the
> > block RAMs and DSP blocks are used for custom functionality which
> > requires the designer to program by whatever means is devised, these
> > tiny CPUs would not need a high level language like C.  The code in
> > them would be small enough to be considered "logic" and developed at
> > the assembly level.
>=20
> The modern way to use the DSP blocks on FPGA's is either with ready-made
> logic blocks, code generator tools like Matlab, or C to hardware
> converters.  They are not configured manually at a low level.  Even if
> when they are generated directly from VHDL or Verilog, the developer
> writes "x =3D y * z + w" with the required number of bits in each element=
,
> and the tools turn that into whatever DSP blocks are needed.

I guess I'm not modern then.  I use VHDL and like it... Yes, I actually
said I like VHDL.  The HDL so many love to hate.

I see no reason why these devices couldn't be programmed using VHDL, but
it would be harder to debug.  But then I expect you are the JTAG sort as
well.  That's not really what I'm proposing and I think you are
overstating the case for "press the magic button" FPGA design.


> The key thing you have to think about here, is who would use these tiny
> cpus, and why.  Is there a reason for using a few of them scattered
> around the device, programmed in assembly (or worse, Forth) ?  Why would
> the developer want to do that instead of just adding another software
> thread to the embedded ARM processor, where development is so much more
> familiar?

Because an ARM can't keep up with the logic.  An ARM is very hard to
interface usefully as a *part* of the logic.  That's the entire point of
the F18A CPUs.  Each one is small enough to be dedicated to the task at
hand (like in the XMOS) while running at a very high speed, enough to
keep up with 100 MHz logic.


> Why would the hardware designer want them, instead of writing
> a little state machine in the language of their choice (VHDL, Verilog,
> System C, MyHDL, C-to-HDL compiler, whatever)?

That depends on what the state machine is doing.  State machines are all
ad-hoc and produce their own little microcosm needing support.  You talk
about the issues of programming CPUs.  State machines are like designing
your own CPU but without any arithmetic.  Add arithmetic, data movements,
etc. and you have now officially designed your own CPU when you could
have just used an existing CPU.

That's fine, if it is what you intended.  Many FPGA users add their own
soft core CPU to an FPGA.  Having these cores would make that
unnecessary.

The question is why would an FPGA designer want to roll their own FSM
when they can use the one in the CPU?
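
[The contrast drawn above can be sketched in miniature.  This is a
hypothetical Python illustration, not any real design flow: the
hand-rolled FSM hard-wires one control sequence into a transition table,
while a generic sequencer gets the same behavior from a short program.]

```python
# Hypothetical miniature contrast (illustrative only).
# A hand-rolled FSM hard-wires one control sequence as transitions:
FSM = {
    ("idle", "start"): "load",
    ("load", "ready"): "send",
    ("send", "ack"):   "idle",
}

def fsm_step(state, event):
    # unknown events leave the state unchanged
    return FSM.get((state, event), state)

# The same behavior on a generic "tiny CPU" is just a short program
# for an existing engine -- new behavior means a new program, not a
# redesigned state machine:
def run_sequencer(program, events):
    trace = []
    pc = 0                              # program counter into the sequence
    for ev in events:
        wait_for, next_state = program[pc]
        if ev == wait_for:              # idle until the expected event
            trace.append(next_state)
            pc = (pc + 1) % len(program)
    return trace

program = [("start", "load"), ("ready", "send"), ("ack", "idle")]
print(run_sequencer(program, ["start", "ready", "ack"]))
# ['load', 'send', 'idle']
```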


> I am missing the compelling use-cases here.  Yes, it is possible to make
> small and simple cpu units with a stack machine architecture, and fit
> lots of them in an FPGA.  But I don't see /why/ I would want them -
> certainly not why they are better than alternatives, and worth the
> learning curve.

Yes, but you aren't really an FPGA designer, no?  I can see your concerns
as a Python programmer.


> > People have mindsets about things and I believe this is one of them.
>=20
> Exactly.  And you have a choice here - work with people with the
> mindsets they have, or give /seriously/ compelling reasons why they
> should invest in the time and effort needed to change those mindsets.
> Wishful thinking is not the answer.

You are a programmer, not an FPGA designer.  I won't try to convince you
of the value of many small CPUs in an FPGA.


> > The GA144 is not so easy to program because people want to use it for
> > the sort of large programs they write for other fast CPUs.
>
> It is not easy to program because it is not easy to program.
> Multi-threaded or multi-process software is harder than single-threaded
> code.

I can see that you don't understand the GA144.  If you are working on a
design that suits the GA144 (not that there are tons of those) it's not a
bad device.  If I were working on a hearing aid app, I would give serious
consideration to this chip.  It is well suited to many types of signal
processing.  I once did a first pass of an oscilloscope design for it
(strictly low bandwidth).  There are a number of apps that suit the
GA144, but otherwise, yes, it would be a bear to adapt to other apps.

But this is not about the GA144.  My point was to illustrate that you
don't need to be locked into the mindset of utilizing every last
instruction cycle.  Rather these CPUs have cycles to spare, so feel free
to waste them.  That's what FPGAs are all about, wasting resources.
FPGAs have some small percentage of the die used for logic and most of
the rest used for routing, most of which is not used.  Much of the logic
is also not used.  Waste, waste, waste!  So a little CPU that is only
used at 1% of its MIPS capacity is not wasteful if it saves a bunch of
logic elsewhere in the FPGA.

That's the point of discussing the GA144.


> The tools and language here for the GA144 - based on Forth - are two
> generations behind the times.  They are totally unfamiliar to almost any
> current software developer.

And they are not relevant to this discussion.


> And yes, there is the question of what kind of software you would want
> to write.  People either want to write small, dedicated software - in
> which case they want a language that is familiar and they want to keep
> the code simple.  Or they want bigger projects, reusing existing code -
> in which case they /need/ a language that is standard.

Who is "they" again?  I'm not picturing this being programmed by the progra=
mming department.  To do so would mean two people would need to do a job fo=
r one person.=20


> Look at the GA144 site.  Apart from the immediate fact that it is pretty
> much a dead site, and clearly a company that has failed to take off,
> look at the examples.  A 10 Mb software Ethernet MAC ?  Who wants /that/
> in software?  A PS/2 keyboard controller?  An MD5 hash generator running
> in 16 cpus?  You can download a 100-line md5 function for C and run it
> on any processor.

Wow!  You are really fixated on the GA144.


> > In an
> > FPGA a very fast processor can be part of the logic rather than an
> > uber-controller riding herd over the whole chip.  But this would
> > require designers to change their thinking of how to use CPUs.  The
> > F18A runs at 700 MIPS peak rate in a 180 nm process.  Instead of one
> > or two in the FPGA like the ARMs in other FPGAs, there would be
> > hundreds, each one running at some GHz.
> >
> It has long been established that lots of tiny processors running really
> fast are far less use than a few big processors running really fast.
> 700 MIPS sounds marvellous, until you realise how simple and limited
> each of these instructions is.

Again, you are pursuing a MIPS argument.  It's not about using all the
MIPS.  The MIPS are there to allow the CPU to do its job in a short time
to keep up with logic.  All the MIPS don't need to be used.

"A few big processors" would suck in being embedded in the logic.  The just=
 can't switch around fast enough.  You must be thinking of many SLOW proces=
sors compared to one fast processor.  Or maybe you are thinking of doing wo=
rk which is suited for a single processor like in a PC. =20

Yeah, you can use one of the ARMs in the Zynq to run Linux and then use
the other to interface to "real time" hardware.  But this is a far cry
from what I am describing.


> At each step here, you have been entirely right about what can be done.
>  Yes, you can make small and simple processors - so small and simple
> that you can have lots of them at high clock speeds.
>=20
> And you have been right that using these would need a change in mindset,
> programming language, and development practice to use them.
>=20
> But nowhere do I see any good reason /why/.  No good use-cases.  If you
> want to turn the software and FPGA development world on its head, you
> need an extraordinarily good case for it.

"On it's head" is a powerful statement.  I'm just talking here.  I'm not wr=
iting a business plan.  I'm asking open minded FPGA designers what they wou=
ld use these CPUs for.=20

Rick C.

Article: 161230
Subject: Re: Tiny CPUs for Slow Logic
From: already5chosen@yahoo.com
Date: Tue, 19 Mar 2019 04:53:44 -0700 (PDT)
On Tuesday, March 19, 2019 at 1:14:56 PM UTC+2, gnuarm.del...@gmail.com wrote:
> On Tuesday, March 19, 2019 at 6:56:42 AM UTC-4, already...@yahoo.com wrote:
> > On Tuesday, March 19, 2019 at 2:13:38 AM UTC+2, gnuarm.del...@gmail.com wrote:
> > > Most of us have implemented small processors for logic operations
> > > that don't need to happen at high speed.  Simple CPUs can be built
> > > into an FPGA using a very small footprint much like the ALU blocks.
> > > There are stack based processors that are very small, smaller than
> > > even a few kB of memory.
> > >
> > > If they were easily programmable in something other than C would
> > > anyone be interested?  Or is a C compiler mandatory even for
> > > processors running very small programs?
> > >
> > > I am picturing this not terribly unlike the sequencer I used many
> > > years ago on an I/O board for an array processor which had its own
> > > assembler.  It was very simple and easy to use, but very much not a
> > > high level language.  This would have a language that was high
> > > level, just not C rather something extensible and simple to use and
> > > potentially interactive.
> > >
> > > Rick C.
> >
> > It is clear that you have Forth in mind.
> > It is less clear why you don't say it straight.
>
> Because this is not about Forth.  It is about very small processors.  I
> would not really bother with Forth as the programming language
> specifically because that would be a layer on top of what you are doing
> and to be efficient it would need to be programmed in assembly.
>
> That said, the assembly language for a stack processor is much like
> Forth since Forth uses a virtual stack machine as its programming
> model.  So yes, it would be similar to Forth.  I most likely would use
> Forth to write programs for these, but that is just my preference since
> that is the language I program in.
>
> But the key here is to program the CPUs in their stack oriented
> assembly.  That's not really Forth even if it is "Forth like".
>
> Is that what you wanted to know?
>
> Rick C.

I wanted to understand if there is a PR element involved.  Like, you are
afraid that if you say "Forth" then most potential readers will
immediately stop reading.

I am not a PR consultant, but if I were, then I'd suggest removing the
word "interactive" from the description of the language that you have in
mind.

BTW, I agree that coding in HDLs sucks for many sorts of sequential
tasks.  And I agree that having a CPU that is *not* narrow in its data
paths, and optionally not narrow in external addresses, but
small/configurable in everything else, could be a good way to "offload"
such parts of a design away from HDL.
I am much less sure that a stack processor is a good choice for such
tasks.


Article: 161231
Subject: Re: Tiny CPUs for Slow Logic
From: gnuarm.deletethisbit@gmail.com
Date: Tue, 19 Mar 2019 04:58:53 -0700 (PDT)
On Tuesday, March 19, 2019 at 7:46:44 AM UTC-4, Tom Gardner wrote:
> On 19/03/19 11:00, gnuarm.deletethisbit@gmail.com wrote:
> > On Tuesday, March 19, 2019 at 6:21:24 AM UTC-4, Tom Gardner wrote:
> >> On 19/03/19 00:13, gnuarm.deletethisbit@gmail.com wrote:
> >>> Most of us have implemented small processors for logic operations
> >>> that don't need to happen at high speed.  Simple CPUs can be built
> >>> into an FPGA using a very small footprint much like the ALU blocks.
> >>> There are stack based processors that are very small, smaller than
> >>> even a few kB of memory.
> >>>
> >>> If they were easily programmable in something other than C would
> >>> anyone be interested?  Or is a C compiler mandatory even for
> >>> processors running very small programs?
> >>>
> >>> I am picturing this not terribly unlike the sequencer I used many
> >>> years ago on an I/O board for an array processor which had its own
> >>> assembler.  It was very simple and easy to use, but very much not a
> >>> high level language.  This would have a language that was high
> >>> level, just not C rather something extensible and simple to use and
> >>> potentially interactive.
> >> Who cares about yet another processor programmed in the same old
> >> language.  It would not have a *U*SP.  In fact it would be "back to
> >> the 80s" :)
> >
> > Sorry, I don't get what any of this means.
> >
> >
> >> However, if you want to make it interesting enough to pass the
> >> elevator test, ensure it can do things that existing systems find
> >> difficult.
> >>
> >> You should have a look at how the XMOS hardware and software
> >> complement each other, so that the combination allows hard real time
> >> operation programming in multicore systems. (Hard means
> >> guaranteed-by-design latencies between successive i/o activities)
> >
> > Yeah I think the XMOS model is way more complex than what I am
> > describing.  The XMOS processors are actually very complex and use
> > lots of gates.  They also don't run all that fast.
>
> Individually not especially fast, aggregate fast.
>
>
> > Their claim to fame is to be able to communicate through shared
> > memory as if the other CPUs were not there in the good way.
>
> Not just shared memory, *far* more interesting than that.
>
> Up to 8 cores in a "tile" share memory.

Yes, I said that.

> Comms between tiles is via an interconnection network

Implementation details I don't really care about.


> Comms with i/o is via the same interconnection network.

Implementation details I don't really care about.


> At the program level there is *no* difference between comms
> via shared memory and comms via interconnection network.
> Nor is there any difference between comms with i/o and
> comms with other cores.

Implementation details I don't really care about.


> All comms is via channels. That's one thing that makes
> the hardware+software environment unique.

Implementation details I don't really care about and it has no relevance
to the topic of embedding in an FPGA.


> > Otherwise they are conventional processors, programmed in
> > conventional ways.
>=20
> No. You are missing the key differentiating points...
>=20
> Conventional processors and programming treat multicore
> programming as an advanced add-on library - explicitly
> so in the case of C. And a right old mess that is.

Irrelevant in this context since this would never be used in the same way
of scattering many CPUs around an FPGA die.


> xC+xCORE *start* by presuming multicore systems, and
> use a set of harmonious concepts to make multicore
> programming relatively easy and predictable.

TL;DR


> > The emphasis here is for the CPU to be nearly invisible as a CPU and mu=
ch
> > more like a function block. =20
>=20
> Why bother? What would be the *benefit*?

Isn't that obvious?  It could do the work of a lot of FPGA logic in the
same way that MCUs are used rather than FPGAs.  It's the same reason why
multipliers, DSP blocks and even memory are included in FPGAs: because
they are much more efficient than using the fabric logic.


> Yes, you can use a screw instead of a nail, but
> that doesn't mean there is a benefit. Unless, of
> course, you can't use a hammer.

I guess no one uses screws, eh?

Rick C.

Article: 161232
Subject: Re: Tiny CPUs for Slow Logic
From: gnuarm.deletethisbit@gmail.com
Date: Tue, 19 Mar 2019 06:03:45 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Tuesday, March 19, 2019 at 7:53:48 AM UTC-4, already...@yahoo.com wrote:
> On Tuesday, March 19, 2019 at 1:14:56 PM UTC+2, gnuarm.del...@gmail.com wrote:
> > On Tuesday, March 19, 2019 at 6:56:42 AM UTC-4, already...@yahoo.com wrote:
> > > On Tuesday, March 19, 2019 at 2:13:38 AM UTC+2, gnuarm.del...@gmail.com wrote:
> > > > Most of us have implemented small processors for logic operations
> > > > that don't need to happen at high speed.  Simple CPUs can be built
> > > > into an FPGA using a very small footprint much like the ALU blocks.
> > > > There are stack based processors that are very small, smaller than
> > > > even a few kB of memory.
> > > >
> > > > If they were easily programmable in something other than C would
> > > > anyone be interested?  Or is a C compiler mandatory even for
> > > > processors running very small programs?
> > > >
> > > > I am picturing this not terribly unlike the sequencer I used many
> > > > years ago on an I/O board for an array processor which had its own
> > > > assembler.  It was very simple and easy to use, but very much not a
> > > > high level language.  This would have a language that was high
> > > > level, just not C, rather something extensible and simple to use
> > > > and potentially interactive.
> > > >
> > > > Rick C.
> > >
> > > It is clear that you have Forth in mind.
> > > It is less clear why you don't say it straight.
> >
> > Because this is not about Forth.  It is about very small processors.
> > I would not really bother with Forth as the programming language
> > specifically because that would be a layer on top of what you are
> > doing, and to be efficient it would need to be programmed in assembly.
> >
> > That said, the assembly language for a stack processor is much like
> > Forth, since Forth uses a virtual stack machine as its programming
> > model.  So yes, it would be similar to Forth.  I most likely would use
> > Forth to write programs for these, but that is just my preference
> > since that is the language I program in.
> >
> > But the key here is to program the CPUs in their stack oriented
> > assembly.  That's not really Forth even if it is "Forth like".
> >
> > Is that what you wanted to know?
> >
> > Rick C.
>
> I wanted to understand if there is a PR element involved.  Like, you are
> afraid that if you say "Forth" then most potential readers immediately
> stop reading.

No, this is not about Forth.


> I am not a PR consultant, but if I were, I'd suggest removing the word
> "interactive" from the description of the language that you have in mind.

That is one of the advantages of this idea.  Why is "interactive" a bad
thing?


> BTW, I agree that coding in HDLs suck for many sorts of sequential tasks.
> And I agree that having a CPU that is *not* narrow in its data paths and
> optionally not narrow in external addresses, but small/configurable in
> everything else, could be a good way to "offload" such parts of a design
> away from HDL.
> I am much less sure that a stack processor is a good choice for such tasks.

Stack processors can be made very simply.  That is the main reason to
suggest them.  There are simple register processors, but I find them more
difficult to program.

I do use Forth for programming this sort of task.  I find it easy to
develop in.  I understand that many are so used to programming in more
complicated languages... or I should say, using more complicated tools,
so they aren't comfortable working closer to the hardware.  But when the
task you are programming up is so simple, then you don't need the
training wheels.  But that is not what I am talking about here.  This is
about a small processor that can be made very efficiently on the FPGA
die.
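
To make the "very small" claim concrete, here is a sketch of why a stack
machine needs so little hardware: every ALU operation is zero-operand,
acting implicitly on the top of the data stack, so there is no register
addressing at all.  The instruction set below is hypothetical (it is not
any real core's ISA), simulated in Python purely for illustration:

```python
# Toy simulation of a minimal stack-machine CPU (hypothetical instruction
# set, loosely in the spirit of small MISC cores -- not the F18A's ISA).
# Every instruction except LIT/JZ is zero-operand; the data stack replaces
# register addressing, which is what keeps the hardware tiny.

def run(program, data_stack=None):
    ds = data_stack or []          # data stack
    pc = 0                         # program counter
    while pc < len(program):
        op = program[pc]
        pc += 1
        if op == "LIT":            # push the next cell as a literal
            ds.append(program[pc]); pc += 1
        elif op == "ADD":
            b, a = ds.pop(), ds.pop(); ds.append(a + b)
        elif op == "AND":
            b, a = ds.pop(), ds.pop(); ds.append(a & b)
        elif op == "DUP":
            ds.append(ds[-1])
        elif op == "DROP":
            ds.pop()
        elif op == "JZ":           # branch to next cell if top of stack is 0
            target = program[pc]; pc += 1
            if ds.pop() == 0:
                pc = target
        else:
            raise ValueError(f"unknown op {op}")
    return ds

# (x + 5) & 0xFF for x = 250, written exactly as Forth would write it:
# 250 5 + 255 AND
print(run(["LIT", 250, "LIT", 5, "ADD", "LIT", 255, "AND"]))  # [255]
```

Note that the program is already Forth-like postfix: the assembly
language falls directly out of the architecture, which is the point being
made about stack CPUs above.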

Would a small, hard core CPU likely run at GIPS in an FPGA?

Rick C.

Article: 161233
Subject: Re: Tiny CPUs for Slow Logic
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Tue, 19 Mar 2019 14:01:58 +0000
Links: << >>  << T >>  << A >>
On 19/03/19 11:58, gnuarm.deletethisbit@gmail.com wrote:
> On Tuesday, March 19, 2019 at 7:46:44 AM UTC-4, Tom Gardner wrote:
>> On 19/03/19 11:00, gnuarm.deletethisbit@gmail.com wrote:
>>> On Tuesday, March 19, 2019 at 6:21:24 AM UTC-4, Tom Gardner wrote:
>>>> On 19/03/19 00:13, gnuarm.deletethisbit@gmail.com wrote:
>>>>> Most of us have implemented small processors for logic operations that
>>>>> don't need to happen at high speed.  Simple CPUs can be built into an
>>>>> FPGA using a very small footprint much like the ALU blocks.  There are
>>>>> stack based processors that are very small, smaller than even a few kB of
>>>>> memory.
>>>>>
>>>>> If they were easily programmable in something other than C would anyone
>>>>> be interested?  Or is a C compiler mandatory even for processors running
>>>>> very small programs?
>>>>>
>>>>> I am picturing this not terribly unlike the sequencer I used many years
>>>>> ago on an I/O board for an array processor which had its own assembler.
>>>>> It was very simple and easy to use, but very much not a high level
>>>>> language.  This would have a language that was high level, just not C
>>>>> rather something extensible and simple to use and potentially
>>>>> interactive.
>>>> Who cares about yet another processor programmed in the same old language.
>>>> It would not have a *U*SP. In fact it would be "back to the 80s" :)
>>>
>>> Sorry, I don't get what any of this means.
>>>
>>>
>>>> However, if you want to make it interesting enough to pass the elevator
>>>> test, ensure it can do things that existing systems find difficult.
>>>>
>>>> You should have a look at how the XMOS hardware and software complement
>>>> each other, so that the combination allows hard real time operation
>>>> programming in multicore systems. (Hard means guaranteed-by-design
>>>> latencies between successive i/o activities)
>>>
>>> Yeah I think the XMOS model is way more complex than what I am describing.
>>> The XMOS processors are actually very complex and use lots of gates.  They
>>> also don't run all that fast.
>>
>> Individually not especially fast, aggregate fast.
>>
>>
>>> Their claim to fame is to be able to
>>> communicate through shared memory as if the other CPUs were not there in the
>>> good way.
>>
>> Not just shared memory, *far* more interesting than that.
>>
>> Up to 8 cores in a "tile" share memory.
> 
> Yes, I said that.
> 
>> Comms between tiles is via an interconnection network
> 
> Implementation details I don't really care about.
> 
> 
>> Comms with i/o is via the same interconnection network.
> 
> Implementation details I don't really care about.
> 
> 
>> At the program level there is *no* difference between comms
>> via shared memory and comms via interconnection network.
>> Nor is there any difference between comms with i/o and
>> comms with other cores.
> 
> Implementation details I don't really care about.
> 
> 
>> All comms is via channels. That's one thing that makes
>> the hardware+software environment unique.
> 
> Implementation details I don't really care about and it has no relevance to the topic of embedding in an FPGA.
> 
> 
>>> Otherwise they are conventional processors, programmed in
>>> conventional ways.
>>
>> No. You are missing the key differentiating points...
>>
>> Conventional processors and programming treat multicore
>> programming as an advanced add-on library - explicitly
>> so in the case of C. And a right old mess that is.
> 
> Irrelevant in this context since this would never be used in the same way of scattering many CPUs around an FPGA die.
> 
> 
>> xC+xCORE *start* by presuming multicore systems, and
>> use a set of harmonious concepts to make multicore
>> programming relatively easy and predictable.
> 
> TL;DR
> 
> 
>>> The emphasis here is for the CPU to be nearly invisible as a CPU and much
>>> more like a function block.
>>
>> Why bother? What would be the *benefit*?
> 
> Isn't that obvious?  It could do the work of a lot of FPGA logic in the same way that MCUs are used rather than FPGAs.  It's the same reason why multipliers, DSP blocks and even memory is included in FPGAs, because they are much more efficient than using the fabric logic.
> 
> 
>> Yes, you can use a screw instead of a nail, but
>> that doesn't mean there is a benefit. Unless, of
>> course, you can't use a hammer.
> 
> I guess no one uses screws, eh?

It is clear that you want other people to validate your
ideas, but you have no interest in
  - understanding what is available
  - understanding in what way your (vague) concepts would
    enable designers to do their job better than using
    existing technology
  - explaining your concept's USP

The first of those is a cardinal sin in my book, since
you are likely to waste your time (don't care) reinventing
a square wheel, and waste other people's time (do care)
figuring out that you aren't enabling anything new.

Good luck.

Article: 161234
Subject: Re: Tiny CPUs for Slow Logic
From: Theo Markettos <theom+news@chiark.greenend.org.uk>
Date: 19 Mar 2019 14:29:02 +0000 (GMT)
Links: << >>  << T >>  << A >>
Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
> Understand XMOS's xCORE processors and xC language, see how
> they complement and support each other. I found the net result
> stunningly easy to get working first time, without having to
> continually read obscure errata!

I can see the merits of the XMOS approach.  But I'm unclear how this relates
to the OP's proposal, which (I think) is having tiny CPUs as hard
logic blocks on an FPGA, like DSP blocks.

I completely understand the problem of running out of hardware threads, so
a means of 'just add another one' is handy.  But the issue is how to combine
such things with other synthesised logic.

The XMOS approach is fine when the hardware is uniform and the software sits
on top, but when the hardware is synthesised and the 'CPUs' sit as pieces in
a fabric containing random logic (as I think the OP is suggesting) it
becomes a lot harder to reason about what the system is doing and what the
software running on such heterogeneous cores should look like.  Only the
FPGA tools have a full view of what the system looks like, and it seems
stretching them to have them also generate software to run on these cores.

We are not talking about a multi- or many-core chip here, with the CPUs as
the primary element of compute, but CPUs scattered around as 'state
machine elements' just up the complexity and make it harder to understand
compared with the same thing synthesised out of flip-flops.

I would be interested to know what applications might use heterogeneous
many-cores and what performance is achievable.

Theo

Article: 161235
Subject: Re: Tiny CPUs for Slow Logic
From: David Brown <david.brown@hesbynett.no>
Date: Tue, 19 Mar 2019 16:04:06 +0100
Links: << >>  << T >>  << A >>
On 19/03/2019 12:52, gnuarm.deletethisbit@gmail.com wrote:
> On Tuesday, March 19, 2019 at 6:26:37 AM UTC-4, David Brown wrote:
>> On 19/03/2019 09:32, gnuarm.deletethisbit@gmail.com wrote:
>>> On Tuesday, March 19, 2019 at 4:15:47 AM UTC-4, David Brown
>>> wrote:
>>>> On 19/03/2019 01:13, gnuarm.deletethisbit@gmail.com wrote:
>>>>> Most of us have implemented small processors for logic 
>>>>> operations that don't need to happen at high speed.  Simple
>>>>> CPUs can be built into an FPGA using a very small footprint
>>>>> much like the ALU blocks. There are stack based processors
>>>>> that are very small, smaller than even a few kB of memory.
>>>>> 
>>>>> If they were easily programmable in something other than C
>>>>> would anyone be interested?  Or is a C compiler mandatory
>>>>> even for processors running very small programs?
>>>>> 
>>>>> I am picturing this not terribly unlike the sequencer I used 
>>>>> many years ago on an I/O board for an array processor which
>>>>> had its own assembler.  It was very simple and easy to use,
>>>>> but very much not a high level language.  This would have a
>>>>> language that was high level, just not C rather something
>>>>> extensible and simple to use and potentially interactive.
>>>>> 
>>>>> Rick C.
>>>>> 
>>>> 
>>>> If it is going to appeal to software developers, you need C.
>>>> And it has to be reasonable, standard C, even if it is for
>>>> small devices - programmers are fed up with the pains needed
>>>> for special device-specific C on 8051, AVR, PIC, etc.  That
>>>> does not necessarily mean it has to be fast, but it should work
>>>> with standard language.  Having 16-bit size rather than 8-bit
>>>> size makes a huge difference to how programmers feel about the
>>>> device - aim for something like the msp430.
>>>> 
>>>> You might, however, want to look at extensions for CSP-style 
>>>> communication between cpus - something like XMOS XC.
>>>> 
>>>> If it is to appeal to hardware (FPGA) developers, C might not
>>>> be as essential.  Some other kind of high level language,
>>>> perhaps centred around state machines, might work.
>>>> 
>>>> But when I see "extensible, simple to use and potentially 
>>>> interactive", I fear someone is thinking of Forth.  People who
>>>> are very used to Forth find it a great language - but you need
>>>> to understand that /nobody/ wants to learn it.  Most
>>>> programmers would rather work in assembler than Forth.  You can
>>>> argue that this attitude is irrational, and that Forth is not
>>>> harder than other languages - you might be right.  But that
>>>> doesn't change matters.
>>> 
>>> Certainly this would be like Forth, but the reality is I'm
>>> thinking of a Forth like CPU because they can be designed so
>>> simply.
>> 
>> I appreciate that.
>> 
>> I can only tell you how /I/ would feel here, and let you use that
>> for what you think it is worth.  I don't claim to speak for all
>> software developers, but unless other people are giving you
>> feedback too, then this is the best you've got :-)  Remember, I am
>> not trying to argue about the pros and cons of different designs or
>> languages, or challenge you to persuade me of anything - I'm just
>> showing you how software developers might react to your design
>> ideas.
> 
> That alone is a misunderstanding of what I am suggesting.  I see no
> reason to involve "programmers".  I don't think any FPGA designer
> would have any trouble using these processors and "programmers" are
> not required.  Heck, the last company I worked for designed FPGAs in
> the software department, so everyone writing HDL for FPGAs was a
> "programmer" so maybe the distinction is less that I realize.
> 

FPGA designers already have at least one foot in the "programmer" camp.
 An increasing proportion (AFAIUI) of FPGA design is done from a
software viewpoint, not a hardware viewpoint.  People use C-to-HDL,
Matlab, high-level languages (Scala, Python, etc.) for their FPGA
designs.  Thinking in terms of wires, registers, logic elements, etc.,
does not scale - the "hardware" part is often dominated by choosing and
connecting the right modules.  (Yes, there are other parts too, such as
clock design, IO, etc.)

I am not convinced that there really is a significant group of hardware
designers who want to program small, limited cpus using low-level
languages, but who don't want to be "mere programmers" working in C or
other common programming languages.

Again, I am failing to see the use-cases you have in mind.  It's hard to
guess what you might be talking about if you don't say.

> 
>>> The F18A stack processor designed by Charles Moore is used in
>>> the GA144 chip.  There are 144 of them with unusual
>>> interconnections that allow the CPU to halt waiting for
>>> communications, saving power.  The CPU is so small that it could
>>> be included in an FPGA as what would be equivalent to a logic
>>> element.
>> 
>> Yes, but look how popular the chip is - it is barely a blip in the 
>> landscape.  There is no doubt that this is a technologically
>> fascinating device.
> 
> That's not the issue, I'm not proposing anyone use a GA144.
> 

Fair enough.  It was an example you gave, so I ran with it.

> 
>> However, it is very difficult to program such chips - almost no one
>> is experienced with such multi-cpu arrangements, and the design 
>> requires a completely different way of thinking from existing
>> software design.
> 
> Again, that's not what I am proposing.  They have hundreds of
> multipliers and DSP blocks in FPGAs with no one worrying about how
> they will tie together.  These CPUs would be similar.
> 

You don't need to program the multipliers or DSP blocks.

Now, if you can find a way to avoid any programming of these tiny cpu
cores, you might be on to something.  When the VHDL or Verilog synthesis
tool meets a calculation with a multiply, it automatically puts in the
DSP blocks that are needed.  When it meets a large array, it
automatically infers ram blocks.  If you can ensure that when it meets
some complex sequential logic, or a state machine, that it infers a tiny
cpu and the program for it, /then/ you will have something immediately
useful.
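
The multiply-inference analogy can be sketched as a toy.  The fragment
below is purely hypothetical -- no synthesis tool works this way, and all
names in it are invented -- but it shows the shape of the idea: walk the
expression `y * z + w` post-order and emit a program for a tiny stack
core, the same way a tool emits DSP blocks today:

```python
# Illustrative sketch only: a toy "compiler" that flattens an expression
# tree into code for a tiny stack CPU, analogous to the way synthesis
# maps "y * z + w" onto DSP blocks.  The opcodes and core are invented.

def compile_expr(node):
    """Post-order walk: operands first, then the zero-operand opcode."""
    if isinstance(node, (int, str)):
        return [("PUSH", node)]            # literal or named signal
    op, left, right = node
    return compile_expr(left) + compile_expr(right) + [(op, None)]

def evaluate(code, signals):
    """Run the stack program against a dict of signal values."""
    ops = {"ADD": lambda a, b: a + b, "MUL": lambda a, b: a * b}
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(signals[arg] if isinstance(arg, str) else arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[op](a, b))
    return stack.pop()

# y * z + w, as a synthesis tool might see it in the HDL:
tree = ("ADD", ("MUL", "y", "z"), "w")
code = compile_expr(tree)
# code: [('PUSH','y'), ('PUSH','z'), ('MUL',None), ('PUSH','w'), ('ADD',None)]
print(evaluate(code, {"y": 3, "z": 4, "w": 5}))  # 17
```

The hard part of the real problem is not this flattening, of course, but
inferring the sequential control flow and proving timing -- which is why
it remains a "might be on to something" rather than an existing tool.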

> 
>> Add to that a language that works backwards, and a syntax that 
>> looks like the cat walked across the keyboard, and you have
>> something that has programmers running away.
> 
> Now you are interjecting your own thoughts.  I never suggested that
> cats be used to program these CPUs.
> 

I'm telling you how things look.

> 
>> My experience with Forth is small and outdated, but not
>> non-existent.
> 
> Too bad this isn't about Forth.

You say that, yet it seems to be entirely about Forth.  Or at least,
about programming cpus in assembly where the stack-based design means
the assembly language is practically Forth.

Of course, even though these devices might have a Forth-like assembly,
it would be possible to have other languages on top.  Do you have any
existing ones in mind?

> 
> 
>> I've worked with dozens of programming languages over the years -
>> I've studied CSP, programmed in Occam, functional programming
>> languages, lots of assemblies, a small amount of CPLD/FPGA work in
>> various languages, and many other kinds of coding.
> 
> There are many areas where a "little" knowledge is a dangerous thing.
> I think programming languages and especially FPGA design are among
> those areas.
> 

And there are many areas where a little knowledge is a useful thing -
programming languages and FPGA design are amongst them.  I am aware of
the limitations of my knowledge - but the breadth is a useful thing here.

> 
>> (Most of my work for the past years has been C, C++ and Python.)
>> I'm not afraid of learning new things.  But when I looked at some
>> of the examples for the GA144, three things struck me.  One is that
>> it was amazing how much they got on the device. Another is to
>> wonder about the limitations you get from the this sort of 
>> architecture.  (That is a big turn-off with the XMOS.  It's 
>> fantastically easy to make nice software-based peripherals using 
>> hardware threads.  And fantastically easy to run out of hardware
>> threads before you've made the basic peripherals you get in a
>> $0.50 microcontroller.)  And the third thing that comes across is
>> how totally and utterly incomprehensible the software design and
>> the programming examples are.  The GA144 is squarely in the
>> category of technology that is cool, impressive, and useless in the
>> real world where developers have to do a job, not play with toys.
> 
> I see why you started your comments with the big caveat.  You seem to
> have a bone to pick with Forth and the GA144, neither of which are
> what I am talking about.  You've gotten ahead of yourself.
> 

To sum up this conversation so far:

Rick: What do people think about tiny processors in an FPGA?  Will
programmers like it even if it does not support C?  Opinions, please.

David: Programmers will want C, possibly with extensions.  They won't
want Forth.

Rick: Look at the GA144, and how great it is, programmed in Forth.  The
programming world is bad because people are stuck in the wrong mindset
of using existing major programming languages and existing major
programming platforms.  They should all change to this kind of chip
because it can run at 700 MIPS with little power.

David: I can only tell you my opinion as a software developer.  The
GA144 is technically interesting, but a total failure in the
marketplace.  No one wants to use Forth.  The cpus may have high clock
speeds, but do almost nothing in each cycle.  If you want people to use
tiny cpus, you have to have a good reason and good use-cases.

Rick: If I want your opinion, I'll insult you for it.  You are clearly
wrong.  This is nothing like the GA144 cpus - I mentioned them to
confuse people.  They won't be programmed in Forth - they will be
programmed in an assembly that looks almost exactly like Forth.  You
should be basing your answers on reading my mind, not my posts.


What is it you actually want here?  Posts that confirm that you are on
the right track, and you are poised to change the FPGA world?  I am
giving you /my/ opinions, based on /my/ experience and /my/
understanding of how programmers would likely want to work - including
programmers who happen to do FPGA work.  If those opinions are of
interest to you, then great.  If you want a fight, or a sycophant, then
let me know so I can bow out of the thread.


So let's get this straight.  I don't have any "bones to pick" with the
GA144, Forth, small cpus, or anything else.  I don't have any biases
against them.  I have facts, and I have opinions based on experience.
If your opinions and experiences are different, that's okay - but don't
tell me I am ignorant, or have dangerously little knowledge, or that I
have bones to pick.


> 
>> Sure, it would be possible to learn this.  But there is no way I
>> could justify the investment in time and effort that would entail.
>> 
>> And there is no way I would want to go to a language with less
>> safety, poorer typing, weaker tools, harder testing, more limited
>> static checking than the development tools I can use now with C and
>> C++.
> 
> Yes, well good thing you would never be the person who wrote any code
> for this.  No "programmers" allowed, only FPGA designers... and no
> amateurs allowed either.  ;)

Feel free to rule out every other possible user too - especially those
that are interested in code quality.  There is a reason why software
developers want good tools and good languages - and it's not laziness or
incompetence.

> 
> 
>>> In the same way that the other functional logic elements like
>>> the block RAMs and DSP blocks are used for custom functionality
>>> which requires the designer to program by whatever means is
>>> devised, these tiny CPUs would not need a high level language
>>> like C.  The code in them would be small enough to be considered
>>> "logic" and developed at the assembly level.
>> 
>> The modern way to use the DSP blocks on FPGA's is either with
>> ready-made logic blocks, code generator tools like Matlab, or C to
>> hardware converters.  They are not configured manually at a low
>> level.  Even if when they are generated directly from VHDL or
>> Verilog, the developer writes "x = y * z + w" with the required
>> number of bits in each element, and the tools turn that into
>> whatever DSP blocks are needed.
> 
> I guess I'm not modern then.  I use VHDL and like it... Yes, I
> actually said I like VHDL.  The HDL so many love to hate.
> 

Sure, there is nothing wrong with that.  But if you want to make
something that appeals to other people, you need to be looking for
"better than the modern choices" - not "worse than the old choices".

> I see no reason why these devices couldn't be programmed using VHDL,
> but it would be harder to debug.  But then I expect you are the JTAG
> sort as well.  That's not really what I'm proposing and I think you
> are overstating the case for "press the magic button" FPGA design.
> 

I use JTAG debugging when that is the appropriate choice.  I use other
types of debugging at other times, or simulations, testing on other
platforms, etc.  Dismissing JTAG debugging as a tool is just as bad as
relying upon it for everything.

When you are thinking of a new way of doing design here, then debugging
and testing should be of prime concern.  I don't think doing it all in
VHDL will cut it.  I can agree that JTAG debugging is not going to work
well for multiple small processors - but you have to think about what
/would/ work well, rather than what won't work.

> 
>> The key thing you have to think about here, is who would use these
>> tiny cpus, and why.  Is there a reason for using a few of them
>> scattered around the device, programmed in assembly (or worse,
>> Forth) ?  Why would the developer want to do that instead of just
>> adding another software thread to the embedded ARM processor, where
>> development is so much more familiar?
> 
> Because an ARM can't keep up with the logic.  An ARM is very hard to
> interface usefully as a *part* of the logic.  That's the entire point
> of the F18A CPUs.  Each one is small enough to be dedicated to the
> task at hand (like in the XMOS) while running at a very high speed,
> enough to keep up with 100 MHz logic.
> 

The F18A devices don't keep up with the logic - not when you are doing
something more than toggling a pin at high speed.  They do so very
little each clock cycle.

But the big question here - the elephant in the room that you keep
ignoring - is what you want these devices to /do/.  Why are you trying
to make them "keep up with the logic" ?  Use hardware to do the things
that hardware is good at - fast, predictable timing, wide data, parallel
actions.  Use software for what software is good at - flexible,
sequential, conditional.  Combine them appropriately - use software to
control the hardware parts, use buffers to avoid the latency variations
in the software.  Use hardware state machines for the small, simple,
fast sequential parts.
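
The "use buffers to avoid latency variations" point is a standard
pattern, and can be sketched in a few lines.  Everything here (FIFO
depth, rates, scheduling probability) is invented for illustration; the
point is only that the FIFO absorbs the software's latency jitter while
the hardware side keeps rigid per-cycle timing:

```python
from collections import deque
import random

# Sketch of the hardware/software split described above: "hardware"
# produces one sample per cycle with rigid timing; "software" drains the
# FIFO in irregular bursts.  Depth and rates are invented for illustration.

FIFO_DEPTH = 16
fifo = deque()
overruns = 0
consumed = []

random.seed(0)
for cycle in range(200):
    # Hardware side: exactly one sample per cycle, no exceptions.
    if len(fifo) == FIFO_DEPTH:
        overruns += 1                 # sample lost: FIFO too shallow
    else:
        fifo.append(cycle)

    # Software side: only runs when the scheduler lets it, but then
    # drains several samples at once.  The latency jitter is absorbed
    # by the FIFO, not by the hardware.
    if random.random() < 0.3:
        for _ in range(min(len(fifo), 8)):
            consumed.append(fifo.popleft())

print("consumed:", len(consumed), "overruns:", overruns)
```

Sizing the FIFO against the worst-case software latency is exactly the
kind of analysis that decides where the hardware/software boundary goes.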

Where do you see your new cpus being used?  Give us some examples.

> 
>> Why would the hardware designer want them, instead of writing a
>> little state machine in the language of their choice (VHDL,
>> Verilog, System C, MyHDL, C-to-HDL compiler, whatever)?
> 
> That depends on what the state machine is doing.  State machines are
> all ad-hoc and produce their own little microcosm needing support.
> You talk about the issues of programming CPUs.  State machines are
> like designing your own CPU but without any arithmetic.  Add
> arithmetic, data movements, etc. and you have now officially designed
> your own CPU when you could have just used an existing CPU.
> 
> That's fine, if it is what you intended.  Many FPGA users add their
> own soft core CPU to an FPGA.  Having these cores would make that
> unnecessary.

Equally, having soft core CPUs makes your cores unnecessary.  Sure, a
real soft core CPU is usually bigger than the cpus you imagine - but
they can do so much more.  And they can do so /today/, using tools
available /today/, that are familiar to programmers /today/.  That
massive benefit in developer efficiency outweighs the cost in logic
cells (if it doesn't, you should not be using FPGA's except to prototype
your ASICs).

> 
> The question is why would an FPGA designer want to roll their own FSM
> when they can use the one in the CPU?
> 

Equally, why should they want a special purpose mini cpu core, when they
can write their state machines as they always have done, using pure VHDL
or Verilog, or additional state machine design software ?
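
One way to see the trade-off both sides are arguing about is to write the
same job twice.  Below is a toy framing parser (the format is invented):
once as an explicit state machine, the way it might be coded in an HDL
process, and once as straight-line sequential code, where the program
counter carries the state that a hardware FSM would need flip-flops for:

```python
# Toy comparison: "wait for start byte, read length, read payload" written
# (a) as an explicit state machine, as one might code it in VHDL, and
# (b) as straight-line sequential code, as one would program it on a small
# CPU.  The framing format here is invented for illustration.

SOF = 0x7E  # hypothetical start-of-frame byte

def parse_fsm(stream):
    """Explicit FSM: every state and transition spelled out."""
    state, length, payload = "IDLE", 0, []
    for byte in stream:
        if state == "IDLE":
            if byte == SOF:
                state = "LEN"
        elif state == "LEN":
            length, state = byte, ("DATA" if byte else "IDLE")
        elif state == "DATA":
            payload.append(byte)
            if len(payload) == length:
                return payload           # one packet, then stop
    return None

def parse_sequential(stream):
    """Sequential version: the program counter *is* the state register."""
    it = iter(stream)
    for byte in it:
        if byte == SOF:
            length = next(it, 0)
            return [next(it) for _ in range(length)]
    return None

stream = [0x00, 0x7E, 3, 10, 20, 30, 0xFF]
assert parse_fsm(stream) == parse_sequential(stream) == [10, 20, 30]
```

The sequential version is shorter because the state is implicit; the FSM
version is what that control flow looks like once flattened for hardware.
Which form is "simpler" is precisely the disagreement in this thread.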

> 
>> I am missing the compelling use-cases here.  Yes, it is possible to
>> make small and simple cpu units with a stack machine architecture,
>> and fit lots of them in an FPGA.  But I don't see /why/ I would
>> want them - certainly not why they are better than alternatives,
>> and worth the learning curve.
> 
> Yes, but you aren't really an FPGA designer, no?  I can see your
> concerns as a Python programmer.
> 
> 
>>> People have mindsets about things and I believe this is one of
>>> them.
>> 
>> Exactly.  And you have a choice here - work with people with the 
>> mindsets they have, or give /seriously/ compelling reasons why
>> they should invest in the time and effort needed to change those
>> mindsets. Wishful thinking is not the answer.
> 
> You are a programmer, not an FPGA designer.  I won't try to convince
> you of the value of many small CPUs in an FPGA.
> 

Pretend I am an FPGA designer, and then try to explain the point of
them.  As far as I can see, this whole thing is a solution in search of
a problem.  Convince me otherwise - or at least, convince me that /you/
think otherwise.

(And while I would not classify myself as an FPGA designer, I have done
a few FPGA designs.  I am by no means an expert, but I am familiar with
the principles and the technology.)

> 
>>> The GA144 is not so easy to program because people want to use it
>>> for the sort of large programs they write for other fast CPUs.
>> 
>> It is not easy to program because it is not easy to program. 
>> Multi-threaded or multi-process software is harder than
>> single-threaded code.
> 
> I can see that you don't understand the GA144.  If you are working on
> a design that suits the GA144 (not that there are tons of those) it's
> not a bad device.  If I were working on a hearing aid app, I would
> give serious consideration to this chip.  It is well suited to many
> types of signal processing.  I once did a first pass of an
> oscilloscope design for it (strictly low bandwidth).  There are a
> number of apps that suit the GA144, but otherwise, yes, it would be a
> bear to adapt to other apps.

I can see that you didn't understand what I wrote - or perhaps you don't
understand programming and software development as much as you think.
Let me try again - the GA144 is not easy to program.  I didn't say
anything about what apps it might be good for - I said it is not easy to
program.  That is partly because it is difficult to make designs for
such a many-processor system, partly because the language is awkward,
and partly because the skill set needed does not match well with skill
sets of most current programmers.  I am basing this on having read
through some of the material on their site, thinking that this is such a
cool chip I'd like to find an excuse to use it on a project.  But I
couldn't find any application where the time, cost and risk could be
remotely justified.

> 
> But this is not about the GA144.  My point was to illustrate that you
> don't need to be locked into the mindset of utilizing every last
> instruction cycle.  Rather these CPUs have cycles to spare, so feel
> free to waste them.

Agreed.  I am happy with that - and I am not locked into a mindset here.
 I can't see what might have given you that impression.

>  That's what FPGAs are all about, wasting
> resources.  FPGAs have some small percentage of the die used for
> logic and most of the rest used for routing, most of which is not
> used.  Much of the logic is also not used.  Waste, waste, waste!  So
> a little CPU that is only used at 1% of its MIPS capacity is not
> wasteful if it saves a bunch of logic elsewhere in the FPGA.

FPGA development is usually about wasting space - you are using only a
small proportion of the die, but using it hard.  Software development is
usually about wasting time - in most systems, the cpu is only doing
something useful for a few percent of the time, but is running hard
during those bursts.  It is not actually that different in principle.

> 
> That's the point of discussing the GA144.
> 

I do understand that most of the cpus on a chip like that are "wasted" -
they are doing very little.  And that's fine.

> 
>> The tools and language here for the GA144 - based on Forth - are
>> two generations behind the times.  They are totally unfamiliar to
>> almost any current software developer.
> 
> And they are not relevant to this discussion.
> 
> 
>> And yes, there is the question of what kind of software you would
>> want to write.  People either want to write small, dedicated
>> software - in which case they want a language that is familiar and
>> they want to keep the code simple.  Or they want bigger projects,
>> reusing existing code - in which case they /need/ a language that
>> is standard.
> 
> Who is "they" again?  I'm not picturing this being programmed by the
> programming department.  To do so would mean two people would need to
> do a job for one person.
> 
> 
>> Look at the GA144 site.  Apart from the immediate fact that it is
>> pretty much a dead site, and clearly a company that has failed to
>> take off, look at the examples.  A 10 Mb software Ethernet MAC ?
>> Who wants /that/ in software?  A PS/2 keyboard controller?  An MD5
>> hash generator running in 16 cpus?  You can download a 100-line md5
>> function for C and run it on any processor.
> 
> Wow!  You are really fixated on the GA144.
> 

Again, /you/ brought it up.  You are trying to promote this idea of lots
of small, fast cpus on a chip.  You repeatedly refuse to give any sort
of indication what these might be used for - but you point to the
existing GA144, which is a chip with lots of small, fast cpus.  Can't
you understand why people might think that it is an example of what you
might be thinking about?  Those examples and application notes are about
the only examples I can find of uses for multiple small, fast cpus,
since you refuse to suggest any - and they are /pointless/.

So, please, tell us what you want to do with your cpus - and why they
would be so much better than existing solutions (like bigger cpus, hard
cores and soft cores, and ad hoc state machines generated by existing
tools).  I will happily leave the GA144 behind.

> 
>>> In an FPGA a very fast processor can be part of the logic rather
>>> than an uber-controller riding herd over the whole chip.  But
>>> this would require designers to change their thinking of how to
>>> use CPUs.  The F18A runs at 700 MIPS peak rate in a 180 nm
>>> process.  Instead of one or two in the FPGA like the ARMs in
>>> other FPGAs, there would be hundreds, each one running at some
>>> GHz.
>>> 
>> It has long been established that lots of tiny processors running
>> really fast are far less use than a few big processors running
>> really fast. 700 MIPS sounds marvellous, until you realise how
>> simple and limited each of these instructions is.
> 
> Again, you are pursuing a MIPS argument.  It's not about using all
> the MIPS.  The MIPS are there to allow the CPU to do its job in a
> short time to keep up with logic.  All the MIPS don't need to be
> used.

I understand that - but you are missing the point.  Even if all the cpu
needs to do is take a 32-bit data item from /here/ and send it out
/there/, a core like the F18A is lost.  A 700 MHz clock does /not/ let you
keep up with the logic if you can't do anything useful without lots of
clock cycles - then it is better to have a slower clock and real
functionality.

> 
> "A few big processors" would suck in being embedded in the logic.
> They just can't switch around fast enough.  You must be thinking of
> many SLOW processors compared to one fast processor.  Or maybe you
> are thinking of doing work which is suited for a single processor
> like in a PC.

An ARM Cortex M1 at 100 MHz is going to do a great deal more than an
F18A-style core at 700 MHz (though I don't expect it to get anything
like that as a soft core on an FPGA).

The SpinalHDL / VexRiscv RISC-V processor can run at 346 MHz from 481
LUTs on the Artix 7, in its smallest version.  Bigger versions have
slower clock speeds, but faster overall execution (with more variation
in latency - which may not be what you want).

And you can program these in a dozen different languages of your choice.

> 
> Yeah, you can use one of the ARMs in the Zynq to run Linux and then
> use the other to interface to "real time" hardware.  But this is a
> far cry from what I am describing.

Yes, I know that is not what you are describing.  (And "big cpu" does
not mean "Linux" - you can run a real time OS like FreeRTOS, or pure
bare metal.)



> 
> 
>> At each step here, you have been entirely right about what can be
>> done. Yes, you can make small and simple processors - so small and
>> simple that you can have lots of them at high clock speeds.
>> 
>> And you have been right that using these would need a change in
>> mindset, programming language, and development practice to use
>> them.
>> 
>> But nowhere do I see any good reason /why/.  No good use-cases.  If
>> you want to turn the software and FPGA development world on its
>> head, you need an extraordinarily good case for it.
> 
> "On its head" is a powerful statement.  I'm just talking here.  I'm
> not writing a business plan.  I'm asking open minded FPGA designers
> what they would use these CPUs for.
> 

I am trying to be open minded, despite how often you tell me you think I
am closed.  But I am failing to see an overwhelming case for these sorts
of cpus - I think they fall between too many chairs, and they fail to be
a better choice than existing solutions.  If you can give me use-cases
where they are a big benefit, then I will be happy to reconsider - but
if you are asking where I think they have a good use, then currently I
can't think of an answer.


Article: 161236
Subject: Re: Tiny CPUs for Slow Logic
From: Svenn Are Bjerkem <svenn.bjerkem@gmail.com>
Date: Tue, 19 Mar 2019 08:24:26 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Tuesday, March 19, 2019 at 1:13:38 AM UTC+1, gnuarm.del...@gmail.com wrote:
> Most of us have implemented small processors for logic operations that don't
> need to happen at high speed.  Simple CPUs can be built into an FPGA using a
> very small footprint much like the ALU blocks.  There are stack based
> processors that are very small, smaller than even a few kB of memory.
> 
> If they were easily programmable in something other than C would anyone be
> interested?  Or is a C compiler mandatory even for processors running very
> small programs?
> 
> I am picturing this not terribly unlike the sequencer I used many years ago
> on an I/O board for an array processor which had its own assembler.  It was
> very simple and easy to use, but very much not a high level language.  This
> would have a language that was high level, just not C, rather something
> extensible and simple to use and potentially interactive.
> 
> Rick C.

picoblaze is such a small cpu and I would like to program it in something
else but its assembler language.

--
svenn

Article: 161237
Subject: Re: Tiny CPUs for Slow Logic
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Tue, 19 Mar 2019 16:19:32 +0000
Links: << >>  << T >>  << A >>
On 19/03/19 14:29, Theo Markettos wrote:
> Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
>> Understand XMOS's xCORE processors and xC language, see how
>> they complement and support each other. I found the net result
>> stunningly easy to get working first time, without having to
>> continually read obscure errata!
> 
> I can see the merits of the XMOS approach.  But I'm unclear how this relates
> to the OP's proposal, which (I think) is having tiny CPUs as hard
> logic blocks on an FPGA, like DSP blocks.

A reasonable question.

A major problem with lots of communicating sequential
processors (such as the OP suggests) is how to /think/
about orchestrating them so they compute and communicate
to produce a useful result.

Once you have such a conceptual framework, thereafter you
can develop tools to help.

Oddly enough that occurred to CAR (Tony) Hoare back in
the 70s, and he produced the CSP (communicating sequential
processes) calculus.

In the 80s that was embodied in hardware and software, the
transputers and occam respectively. The modern variant is
the xCORE processors and xC.

They provide a concrete demonstration of one set of tools
and  techniques that allow a cloud of processors to do
useful work.

That's something the GA144 conspicuously failed to achieve.

The OP appears to have a vague concept of something running
through his head, but appears unwilling to understand what
has been tried, what has failed, and where the /conceptual/
practical problems lie.

Overall the OP is a bit like the UK Parliament at the moment.
Both know what they don't want, but can't articulate/decide
what they do want.

The UK Parliament is an unmitigated dysfunctional mess.



> I completely understand the problem of running out of hardware threads, so
> a means of 'just add another one' is handy.  But the issue is how to combine
> such things with other synthesised logic.

I don't think it is difficult to combine those, any more
or less than it is difficult to combine current traditional
hardware and software.


> The XMOS approach is fine when the hardware is uniform and the software sits
> on top, but when the hardware is synthesised and the 'CPUs' sit as pieces in
> a fabric containing random logic (as I think the OP is suggesting) it
> becomes a lot harder to reason about what the system is doing and what the
> software running on such heterogeneous cores should look like.  Only the
> FPGA tools have a full view of what the system looks like, and it seems
> stretching them to have them also generate software to run on these cores.

Through long experience, I'm wary of any single tool that
claims to do everything from top to bottom. They always
work well for things that fit their constraints, but badly
otherwise.

N.B. that includes a single programming style from top to
bottom of a software application. I've used top-level FSMs
expressed in GC'ed OOP languages that had procedural runtimes.
Why? Because the application domain was inherently FSM based,
the GC'ed OOP tools were the best way to create distributed high
availability systems, and the procedural language was  the best
way to create the runtime.

I have comparable examples involving hardware all the
way from low-noise analogue electronics upwards.

Moral: choose the right conceptual framework for each part
of the problem.


> We are not talking about a multi- or many- core chip here, with the CPUs as
> the primary element of compute, but the CPUs scattered around as 'state
> machine elements' just ups the complexity and makes it harder to understand
> compared with the same thing synthesised out of flip-flops.

It is up to the OP to give us a clue as to example problems
and solutions, and why his concepts are significantly better
than existing techniques.


> I would be interested to know what applications might use heterogenous
> many-cores and what performance is achievable.

Yup.

The "granularity" of the computation and communication will
be a key to understanding what the OP is thinking.

Article: 161238
Subject: Re: Tiny CPUs for Slow Logic
From: already5chosen@yahoo.com
Date: Tue, 19 Mar 2019 10:35:47 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Tuesday, March 19, 2019 at 6:19:36 PM UTC+2, Tom Gardner wrote:
> On 19/03/19 14:29, Theo Markettos wrote:
> > Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
> >> Understand XMOS's xCORE processors and xC language, see how
> >> they complement and support each other. I found the net result
> >> stunningly easy to get working first time, without having to
> >> continually read obscure errata!
> > 
> > I can see the merits of the XMOS approach.  But I'm unclear how this relates
> > to the OP's proposal, which (I think) is having tiny CPUs as hard
> > logic blocks on an FPGA, like DSP blocks.
> 
> A reasonable question.
> 
> A major problem with lots of communicating sequential
> processors (such as the OP suggests) is how to /think/
> about orchestrating them so they compute and communicate
> to produce a useful result.
> 
> Once you have such a conceptual framework, thereafter you
> can develop tools to help.
> 
> Oddly enough that occurred to CAR (Tony) Hoare back in
> the 70s, and he produced the CSP (communicating sequential
> processes) calculus.
> 

Which had surprisingly small influence on how the majority (not majority in
the sense of 70%, majority in the sense of 99.7%) of the industry solves its
problems.

> In the 80s that was embodied in hardware and software, the
> transputers and occam respectively. The modern variant is
> the xCORE processors and xC.
> 

The same as above.

> They provide a concrete demonstration of one set of tools
> and techniques that allow a cloud of processors to do
> useful work.
> 
> That's something the GA144 conspicuously failed to achieve.
> 
> The OP appears to have a vague concept of something running
> through his head, but appears unwilling to understand what
> has been tried, what has failed, and where the /conceptual/
> practical problems lie.
> 
> Overall the OP is a bit like the UK Parliament at the moment.
> Both know what they don't want, but can't articulate/decide
> what they do want.
> 
> The UK Parliament is an unmitigated dysfunctional mess.
> 

Do you prefer a dysfunctional mesh ;)

> 
> 
> > I completely understand the problem of running out of hardware threads, so
> > a means of 'just add another one' is handy.  But the issue is how to combine
> > such things with other synthesised logic.
> 
> I don't think it is difficult to combine those, any more
> or less than it is difficult to combine current traditional
> hardware and software.
> 
> 
> > The XMOS approach is fine when the hardware is uniform and the software sits
> > on top, but when the hardware is synthesised and the 'CPUs' sit as pieces in
> > a fabric containing random logic (as I think the OP is suggesting) it
> > becomes a lot harder to reason about what the system is doing and what the
> > software running on such heterogeneous cores should look like.  Only the
> > FPGA tools have a full view of what the system looks like, and it seems
> > stretching them to have them also generate software to run on these cores.
> 
> Through long experience, I'm wary of any single tool that
> claims to do everything from top to bottom. They always
> work well for things that fit their constraints, but badly
> otherwise.
> 
> N.B. that includes a single programming style from top to
> bottom of a software application. I've used top-level FSMs
> expressed in GC'ed OOP languages that had procedural runtimes.
> Why? Because the application domain was inherently FSM based,
> the GC'ed OOP tools were the best way to create distributed high
> availability systems, and the procedural language was the best
> way to create the runtime.
> 
> I have comparable examples involving hardware all the
> way from low-noise analogue electronics upwards.
> 
> Moral: choose the right conceptual framework for each part
> of the problem.
> 
> 
> > We are not talking about a multi- or many- core chip here, with the CPUs as
> > the primary element of compute, but the CPUs scattered around as 'state
> > machine elements' just ups the complexity and makes it harder to understand
> > compared with the same thing synthesised out of flip-flops.
> 
> It is up to the OP to give us a clue as to example problems
> and solutions, and why his concepts are significantly better
> than existing techniques.
> 
> 
> > I would be interested to know what applications might use heterogenous
> > many-cores and what performance is achievable.
> 
> Yup.
> 
> The "granularity" of the computation and communication will
> be a key to understanding what the OP is thinking.

I don't know what Rick had in mind.
I personally would go for one "hard-CPU" block per 4000-5000 6-input logic
elements (i.e. Altera ALMs or Xilinx CLBs). Each block could be configured
either as one 64-bit core or a pair of 32-bit cores. The block would contain
hard instruction decoders/ALUs/shifters and hard register files. It could
optionally borrow adjacent DSP blocks for multipliers. Adjacent embedded
memory blocks can be used for data memory. Code memory should be a bit more
flexible, giving the designer a choice between embedded memory blocks or
distributed memory (X)/MLABs(A).

Article: 161239
Subject: Xilinx M1 Pad file
From: "A.P.Richelieu" <aprichelieu@gmail.com>
Date: Tue, 19 Mar 2019 20:29:47 +0100
Links: << >>  << T >>  << A >>
Is there anyone that has a description of the Xilinx M1 Pad file syntax?
An example file would do as well.

Best Regards
AP

Article: 161240
Subject: Re: Tiny CPUs for Slow Logic
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Tue, 19 Mar 2019 20:07:35 +0000
Links: << >>  << T >>  << A >>
On 19/03/19 17:35, already5chosen@yahoo.com wrote:
> On Tuesday, March 19, 2019 at 6:19:36 PM UTC+2, Tom Gardner wrote:
>> On 19/03/19 14:29, Theo Markettos wrote:
>>> Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
>>>> Understand XMOS's xCORE processors and xC language, see how they
>>>> complement and support each other. I found the net result stunningly
>>>> easy to get working first time, without having to continually read
>>>> obscure errata!
>>> 
>>> I can see the merits of the XMOS approach.  But I'm unclear how this
>>> relates to the OP's proposal, which (I think) is having tiny CPUs as
>>> hard logic blocks on an FPGA, like DSP blocks.
>> 
>> A reasonable question.
>> 
>> A major problem with lots of communicating sequential processors (such as
>> the OP suggests) is how to /think/ about orchestrating them so they compute
>> and communicate to produce a useful result.
>> 
>> Once you have such a conceptual framework, thereafter you can develop tools
>> to help.
>> 
>> Oddly enough that occurred to CAR (Tony) Hoare back in the 70s, and he
>> produced the CSP (communicating sequential processes) calculus.
>> 
> 
> Which had surprisingly small influence on how majority (not majority in sense
> of 70%, majority in sense of 99.7%) of the industry solve their problems.

That's principally because Moore's "law" enabled people to
avoid confronting the issues. Now that Moore's "law" has run
out of steam, the future becomes more interesting.

Note that TI included some of the concepts in its DSP processors.

Golang has included some of the concepts.

Many libraries included some of the concepts.
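For anyone who hasn't met those concepts in Go: the sketch below (my
illustration, not from any post in this thread) shows two sequential
processes that share no state and interact only through an unbuffered
channel, the same rendezvous-style communication that occam and xC
channels provide.

```go
package main

import "fmt"

// producer is a sequential process in the CSP sense: it shares no state
// with its peer and interacts only through channel rendezvous.
func producer(n int, out chan<- int) {
	for i := 0; i < n; i++ {
		out <- i * i // blocks until the receiver is ready (unbuffered channel)
	}
	close(out)
}

// sumOfSquares is the consuming process; because the channel is unbuffered,
// every exchange is a synchronisation point, as in occam/xC.
func sumOfSquares(n int) int {
	ch := make(chan int)
	go producer(n, ch)
	sum := 0
	for v := range ch { // receive until the producer closes the channel
		sum += v
	}
	return sum
}

func main() {
	fmt.Println(sumOfSquares(5)) // 0+1+4+9+16 = 30
}
```

The point is only the communication discipline, not the arithmetic: neither
goroutine can race ahead of the other, which is exactly the property CSP
gives you for reasoning about clouds of small processors.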



>> In the 80s that was embodied in hardware and software, the transputers and
>> occam respectively. The modern variant is the xCORE processors and xC.
>> 
> 
> The same as above.
> 
>> They provide a concrete demonstration of one set of tools and  techniques
>> that allow a cloud of processors to do useful work.
>> 
>> That's something the GA144 conspicuously failed to achieve.
>> 
>> The OP appears to have a vague concept of something running through his
>> head, but appears unwilling to understand what has been tried, what has
>> failed, and where the /conceptual/ practical problems lie.
>> 
>> Overall the OP is a bit like the UK Parliament at the moment. Both know
>> what they don't want, but can't articulate/decide what they do want.
>> 
>> The UK Parliament is an unmitigated dysfunctional mess.
>> 
> 
> Do you prefer dysfunctional mesh ;)

:) I'll settle for anything that /works/ predictably :(



>>> I completely understand the problem of running out of hardware threads,
>>> so a means of 'just add another one' is handy.  But the issue is how to
>>> combine such things with other synthesised logic.
>> 
>> I don't think it is difficult to combine those, any more or less than it is
>> difficult to combine current traditional hardware and software.
>> 
>> 
>>> The XMOS approach is fine when the hardware is uniform and the software
>>> sits on top, but when the hardware is synthesised and the 'CPUs' sit as
>>> pieces in a fabric containing random logic (as I think the OP is
>>> suggesting) it becomes a lot harder to reason about what the system is
>>> doing and what the software running on such heterogeneous cores should
>>> look like.  Only the FPGA tools have a full view of what the system looks
>>> like, and it seems stretching them to have them also generate software to
>>> run on these cores.
>> 
>> Through long experience, I'm wary of any single tool that claims to do
>> everything from top to bottom. They always work well for things that fit
>> their constraints, but badly otherwise.
>> 
>> N.B. that includes a single programming style from top to bottom of a
>> software application. I've used top-level FSMs expressed in GC'ed OOP
>> languages that had procedural runtimes. Why? Because the application domain
>> was inherently FSM based, the GC'ed OOP tools were the best way to create
>> distributed high availability systems, and the procedural language was  the
>> best way to create the runtime.
>> 
>> I have comparable examples involving hardware all the way from low-noise
>> analogue electronics upwards.
>> 
>> Moral: choose the right conceptual framework for each part of the problem.
>> 
>> 
>>> We are not talking about a multi- or many- core chip here, with the CPUs
>>> as the primary element of compute, but the CPUs scattered around as
>>> 'state machine elements' just ups the complexity and makes it harder to
>>> understand compared with the same thing synthesised out of flip-flops.
>> 
>> It is up to the OP to give us a clue as to example problems and solutions,
>> and why his concepts are significantly better than existing techniques.
>> 
>> 
>>> I would be interested to know what applications might use heterogenous 
>>> many-cores and what performance is achievable.
>> 
>> Yup.
>> 
>> The "granularity" of the computation and communication will be a key to
>> understanding what the OP is thinking.
> 
> I don't know what Rick had in mind. I personally would go for one "hard-CPU"
> block per 4000-5000 6-input logic elements (i.e. Altera ALMs or Xilinx CLBs).
> Each block could be configured either as one 64-bit core or a pair of 32-bit
> cores. The block would contain hard instruction decoders/ALUs/shifters and
> hard register files. It could optionally borrow adjacent DSP blocks for
> multipliers. Adjacent embedded memory blocks can be used for data memory.
> Code memory should be a bit more flexible, giving the designer a choice
> between embedded memory blocks or distributed memory (X)/MLABs(A).

It would be interesting to find an application level
description (i.e. language constructs) that
  - could be automatically mapped onto those primitives
    by a toolset
  - was useful for more than a niche subset of applications
  - was significantly better than existing tools

I wouldn't hold my breath :)

Article: 161241
Subject: Re: Tiny CPUs for Slow Logic
From: gnuarm.deletethisbit@gmail.com
Date: Tue, 19 Mar 2019 19:30:26 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Tuesday, March 19, 2019 at 10:29:07 AM UTC-4, Theo Markettos wrote:
> Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
> > Understand XMOS's xCORE processors and xC language, see how
> > they complement and support each other. I found the net result
> > stunningly easy to get working first time, without having to
> > continually read obscure errata!
> 
> I can see the merits of the XMOS approach.  But I'm unclear how this relates
> to the OP's proposal, which (I think) is having tiny CPUs as hard
> logic blocks on an FPGA, like DSP blocks.
> 
> I completely understand the problem of running out of hardware threads, so
> a means of 'just add another one' is handy.  But the issue is how to combine
> such things with other synthesised logic.
> 
> The XMOS approach is fine when the hardware is uniform and the software sits
> on top, but when the hardware is synthesised and the 'CPUs' sit as pieces in
> a fabric containing random logic (as I think the OP is suggesting) it
> becomes a lot harder to reason about what the system is doing and what the
> software running on such heterogeneous cores should look like.  Only the
> FPGA tools have a full view of what the system looks like, and it seems
> stretching them to have them also generate software to run on these cores.

When people talk about things like "software running on such heterogeneous
cores" it makes me think they don't really understand how this could be
used.  If you treat these small cores like logic elements, you don't have
such lofty descriptions of "system software" since the software isn't
created out of some global software package.  Each core is designed to do a
specific job just like any other piece of hardware, and it has discrete
inputs and outputs just like any other piece of hardware.  If the hardware
clock is not too fast, the software can synchronize with and literally
function like hardware, but implementing more complex logic than the same
area of FPGA fabric might.

There is no need to think about how the CPUs would communicate unless there
is a specific need for them to do so.  The F18A uses a handshaked parallel
port in their design.  They seem to have done a pretty slick job of it and
can actually hang the processor waiting for the acknowledgement, saving
power and getting an instantaneous wake up following the handshake.  This
can be used with other CPUs or
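The stall-until-acknowledged behaviour described above can be modelled in
software terms. The sketch below uses a Go channel purely as an illustration
of the request/acknowledge rendezvous (the `port` type and its methods are my
invention, not a description of the F18A's actual port hardware): the sender
blocks, like the stalled core, until the far side takes the word.

```go
package main

import "fmt"

// port models one handshaked parallel port: the sender presents a word
// and stalls until the receiver takes it.
type port struct {
	data chan uint32 // unbuffered: a send completes only when matched by a receive
}

// send blocks (like the stalled core, consuming no "cycles") until the
// far side accepts the word.
func (p port) send(w uint32) {
	p.data <- w
}

// recv takes a word; completing the receive is the acknowledge that
// instantly wakes the stalled sender.
func (p port) recv() uint32 {
	return <-p.data
}

func main() {
	p := port{data: make(chan uint32)}
	done := make(chan uint32)
	go func() {
		// "hardware" side: eventually reads the port, acking the transfer
		done <- p.recv()
	}()
	p.send(0xDEADBEEF) // stalls here until the receiver acks
	fmt.Printf("transferred %#x\n", <-done)
}
```

In real silicon the "sleep while stalled" part is what saves the power; the
channel rendezvous only captures the ordering, not the energy behaviour.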


> We are not talking about a multi- or many- core chip here, with the CPUs as
> the primary element of compute, but the CPUs scattered around as 'state
> machine elements' just ups the complexity and makes it harder to understand
> compared with the same thing synthesised out of flip-flops.

Not sure what is hard to think about.  It's a CPU, a small CPU with limited
memory, used to implement small tasks; it can do rather complex operations
compared to a state machine, and it includes memory, arithmetic and logic
as well as I/O, without having to write a single line of HDL.  Only the
actual app needs to be written.


> I would be interested to know what applications might use heterogenous
> many-cores and what performance is achievable.

Yes, clearly not getting the concept.  Asking about heterogeneous
performance is totally antithetical to this idea.

Rick C.

Article: 161242
Subject: Re: Tiny CPUs for Slow Logic
From: gnuarm.deletethisbit@gmail.com
Date: Tue, 19 Mar 2019 19:32:02 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Tuesday, March 19, 2019 at 11:24:33 AM UTC-4, Svenn Are Bjerkem wrote:
> On Tuesday, March 19, 2019 at 1:13:38 AM UTC+1, gnuarm.del...@gmail.com wrote:
> > Most of us have implemented small processors for logic operations that
> > don't need to happen at high speed.  Simple CPUs can be built into an FPGA
> > using a very small footprint much like the ALU blocks.  There are stack
> > based processors that are very small, smaller than even a few kB of memory.
> > 
> > If they were easily programmable in something other than C would anyone be
> > interested?  Or is a C compiler mandatory even for processors running very
> > small programs?
> > 
> > I am picturing this not terribly unlike the sequencer I used many years
> > ago on an I/O board for an array processor which had its own assembler.
> > It was very simple and easy to use, but very much not a high level
> > language.  This would have a language that was high level, just not C,
> > rather something extensible and simple to use and potentially interactive.
> > 
> > Rick C.
> 
> picoblaze is such a small cpu and I would like to program it in something
> else but its assembler language.

Yes, it is small.  How large is the program you are interested in?

Rick C.

Article: 161243
Subject: Re: Tiny CPUs for Slow Logic
From: David Brown <david.brown@hesbynett.no>
Date: Wed, 20 Mar 2019 11:14:17 +0100
Links: << >>  << T >>  << A >>
On 20/03/2019 03:30, gnuarm.deletethisbit@gmail.com wrote:
> On Tuesday, March 19, 2019 at 10:29:07 AM UTC-4, Theo Markettos
> wrote:
>> Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
>>> Understand XMOS's xCORE processors and xC language, see how they
>>> complement and support each other. I found the net result 
>>> stunningly easy to get working first time, without having to 
>>> continually read obscure errata!
>> 
>> I can see the merits of the XMOS approach.  But I'm unclear how
>> this relates to the OP's proposal, which (I think) is having tiny
>> CPUs as hard logic blocks on an FPGA, like DSP blocks.
>> 
>> I completely understand the problem of running out of hardware
>> threads, so a means of 'just add another one' is handy.  But the
>> issue is how to combine such things with other synthesised logic.
>> 
>> The XMOS approach is fine when the hardware is uniform and the
>> software sits on top, but when the hardware is synthesised and the
>> 'CPUs' sit as pieces in a fabric containing random logic (as I
>> think the OP is suggesting) it becomes a lot harder to reason about
>> what the system is doing and what the software running on such
>> heterogeneous cores should look like.  Only the FPGA tools have a
>> full view of what the system looks like, and it seems stretching
>> them to have them also generate software to run on these cores.
> 
> When people talk about things like "software running on such
> heterogeneous cores" it makes me think they don't really understand
> how this could be used.  If you treat these small cores like logic
> elements, you don't have such lofty descriptions of "system software"
> since the software isn't created out of some global software package.
> Each core is designed to do a specific job just like any other piece
> of hardware and it has discrete inputs and outputs just like any
> other piece of hardware.  If the hardware clock is not too fast, the
> software can synchronize with and literally function like hardware,
> but implementing more complex logic than the same area of FPGA fabric
> might.
> 

That is software.

If you want to try to get cycle-precise control of the software and use
that precision for direct hardware interfacing, you are almost certainly
going to have a poor, inefficient and difficult design.  It doesn't
matter if you say "think of it like logic" - it is /not/ logic, it is
software, and you don't use that for cycle-precise control.  You use it
when you need flexibility, calculations, and decisions.

> There is no need to think about how the CPUs would communicate unless
> there is a specific need for them to do so.  The F18A uses a
> handshaked parallel port in their design.  They seem to have done a
> pretty slick job of it and can actually hang the processor waiting
> for the acknowledgement saving power and getting an instantaneous
> wake up following the handshake.  This can be used with other CPUs or
> 

Fair enough.
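[The req/ack exchange described above is essentially a two-wire handshake. As a rough single-threaded C model of the protocol -- my own sketch, not GreenArrays' actual F18A port interface; on real silicon the consumer would sleep on `req` and wake instantly when it rises:]

```c
#include <assert.h>
#include <stdint.h>

/* Two-wire request/acknowledge handshake between a producer and a
 * consumer, modelled as plain flags so the two sides can be stepped
 * alternately from one thread. */
typedef struct { int req, ack; uint8_t data; } port_t;

/* Producer: drive data, raise req; drop req once ack is seen. */
static void put(port_t *p, uint8_t v) { p->data = v; p->req = 1; }
static int  put_done(port_t *p) {      /* call until it returns 1 */
    if (!p->ack) return 0;
    p->req = 0;
    return 1;
}

/* Consumer: latch data when req is up, raise ack; drop ack when req
 * falls.  A real core would stall (and save power) instead of polling. */
static int get(port_t *p, uint8_t *out) {
    if (!p->req) return 0;
    *out = p->data;
    p->ack = 1;
    return 1;
}
static int get_done(port_t *p) {
    if (p->req) return 0;
    p->ack = 0;
    return 1;
}
```

Stepping the four phases in order completes one transfer and returns both wires to idle.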

> 
> 
>> We are not talking about a multi- or many- core chip here, with the
>> CPUs as the primary element of compute, but the CPUs scattered
>> around as 'state machine elements' just ups the complexity and
>> makes it harder to understand compared with the same thing
>> synthesised out of flip-flops.
> 
> Not sure what is hard to think about.  It's a CPU, a small CPU with
> limited memory, meant to implement small tasks.  It can do rather
> complex operations compared to a state machine, and includes memory,
> arithmetic and logic as well as I/O, without having to write a single
> line of HDL.  Only the actual app needs to be written.
> 
> 
>> I would be interested to know what applications might use
>> heterogenous many-cores and what performance is achievable.
> 
> Yes, clearly not getting the concept.  Asking about heterogeneous
> performance is totally antithetical to this idea.
> 
> Rick C.
> 



Article: 161244
Subject: Re: Tiny CPUs for Slow Logic
From: Philipp Klaus Krause <pkk@spth.de>
Date: Wed, 20 Mar 2019 11:26:35 +0100
Links: << >>  << T >>  << A >>
Am 19.03.19 um 16:24 schrieb Svenn Are Bjerkem:
> 
> picoblaze is such a small cpu and I would like to program it in something else but its assembler language. 
> 

It would be possible to write a C compiler for it (with some
restrictions, such as functions being non-reentrant). The architecture
doesn't seem any worse than PIC. And there are / were pic14 and pic16
backends in SDCC.
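[For reference, "non-reentrant" in the SDCC/PIC sense typically means that parameters and locals live at fixed static addresses rather than on a stack. A hand-written illustration of the transform -- the names and the example function are made up, this is not SDCC output:]

```c
#include <assert.h>
#include <stdint.h>

/* Source-level function:
 *   uint8_t scale(uint8_t x) { uint8_t t = x << 1; return t + 3; }
 *
 * A non-reentrant compilation gives every parameter and local a fixed
 * static slot, so no stack frame is needed -- fine for a
 * PicoBlaze-class core, but the function must never be called
 * recursively or re-entered from an interrupt. */
static uint8_t scale_param_x;   /* fixed slot for parameter x */
static uint8_t scale_local_t;   /* fixed slot for local t */

static uint8_t scale(void) {
    scale_local_t = (uint8_t)(scale_param_x << 1);
    return (uint8_t)(scale_local_t + 3);
}

/* Call site: store the argument into its slot, then call. */
static uint8_t call_scale(uint8_t x) {
    scale_param_x = x;
    return scale();
}
```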

Philipp

Article: 161245
Subject: Re: Tiny CPUs for Slow Logic
From: already5chosen@yahoo.com
Date: Wed, 20 Mar 2019 03:29:44 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Wednesday, March 20, 2019 at 4:32:07 AM UTC+2, gnuarm.del...@gmail.com wrote:
> On Tuesday, March 19, 2019 at 11:24:33 AM UTC-4, Svenn Are Bjerkem wrote:
> > On Tuesday, March 19, 2019 at 1:13:38 AM UTC+1, gnuarm.del...@gmail.com wrote:
> > > Most of us have implemented small processors for logic operations that don't need to happen at high speed.  Simple CPUs can be built into an FPGA using a very small footprint much like the ALU blocks.  There are stack based processors that are very small, smaller than even a few kB of memory.
> > >
> > > If they were easily programmable in something other than C would anyone be interested?  Or is a C compiler mandatory even for processors running very small programs?
> > >
> > > I am picturing this not terribly unlike the sequencer I used many years ago on an I/O board for an array processor which had its own assembler.  It was very simple and easy to use, but very much not a high level language.  This would have a language that was high level, just not C, rather something extensible and simple to use and potentially interactive.
> > >
> > > Rick C.
> >
> > picoblaze is such a small cpu and I would like to program it in something else but its assembler language.
>
> Yes, it is small.  How large is the program you are interested in?
>
> Rick C.

I don't know about Svenn Are Bjerkem, but can tell you about myself.
The last time I considered something like that, and wrote enough of the program to make measurements, the program contained ~250 Nios2 instructions. I'd guess on a minimalistic stack machine it would take 350-400 instructions.
In the end, I didn't do it in software. Coding the same functionality in HDL turned out not to be hard, which probably suggests that my case was smaller than average.

At the other extreme, where I did end up using a "small" soft core, it was much more like "real" software: 2300 Nios2 instructions.

Article: 161246
Subject: Re: Tiny CPUs for Slow Logic
From: already5chosen@yahoo.com
Date: Wed, 20 Mar 2019 03:41:50 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Tuesday, March 19, 2019 at 10:07:38 PM UTC+2, Tom Gardner wrote:
> On 19/03/19 17:35, already5chosen@yahoo.com wrote:
> > On Tuesday, March 19, 2019 at 6:19:36 PM UTC+2, Tom Gardner wrote:
> >>
> >> The UK Parliament is an unmitigated dysfunctional mess.
> >>
> >
> > Do you prefer dysfunctional mesh ;)
>
> :) I'll settle for anything that /works/ predictably :(
>

The UK political system is completely off-topic in comp.arch.fpga. However I'd say that IMHO right now your parliament is facing an unusually difficult problem on one hand, but at the same time it's not really a "life or death" sort of problem. Having troubles and appearing non-decisive in such a situation is normal. It does not mean that the system is broken.

Article: 161247
Subject: Re: Tiny CPUs for Slow Logic
From: Theo <theom+news@chiark.greenend.org.uk>
Date: 20 Mar 2019 10:53:02 +0000 (GMT)
Links: << >>  << T >>  << A >>
gnuarm.deletethisbit@gmail.com wrote:
> On Tuesday, March 19, 2019 at 10:29:07 AM UTC-4, Theo Markettos wrote:
> >
> When people talk about things like "software running on such heterogeneous
> cores" it makes me think they don't really understand how this could be
> used.  If you treat these small cores like logic elements, you don't have
> such lofty descriptions of "system software" since the software isn't
> created out of some global software package.  Each core is designed to do
> a specific job just like any other piece of hardware and it has discrete
> inputs and outputs just like any other piece of hardware.  If the hardware
> clock is not too fast, the software can synchronize with and literally
> function like hardware, but implementing more complex logic than the same
> area of FPGA fabric might.

The point is that we need to understand what the whole system is doing.  In
the XMOS case, we can look at a piece of software with N threads, running
across the cores provided on the chip.  One piece of software, distributed
over the hardware resource available - the system is doing one thing.

Your bottom-up approach means it's difficult to see the big picture of
what's going on.  That means it's hard to understand the whole system, and
to program from a whole-system perspective.

> Not sure what is hard to think about.  It's a CPU, a small CPU with
> limited memory to implement small tasks that can do rather complex
> operations compared to a state machine really and includes memory,
> arithmetic and logic as well as I/O without having to write a single line
> of HDL.  Only the actual app needs to be written.

Here are the semantic descriptions of basic logic elements:

LUT:  q = f(x,y,z)
FF:   q <= d_in  (delay of one cycle)
BRAM: q = array[addr]
DSP:  q = a*b + c
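[Those one-line semantics are concrete enough to model in a few lines of C. A sketch -- the function names and bit-widths are mine, not any vendor's library:]

```c
#include <assert.h>
#include <stdint.h>

/* LUT: q = f(x,y,z) -- a 3-input LUT is just an 8-bit truth table,
 * indexed by the concatenated inputs. */
static int lut3(uint8_t truth, int x, int y, int z) {
    return (truth >> ((z << 2) | (y << 1) | x)) & 1;
}

/* FF: q <= d_in -- output is the input delayed by one clock edge. */
typedef struct { int q, d; } ff_t;
static void ff_clock(ff_t *ff) { ff->q = ff->d; }

/* BRAM: q = array[addr] */
static uint16_t bram_read(const uint16_t mem[], unsigned addr) {
    return mem[addr];
}

/* DSP: q = a*b + c -- multiply-accumulate with a wider result. */
static int32_t dsp_mac(int16_t a, int16_t b, int32_t c) {
    return (int32_t)a * b + c;
}
```

The point stands: each primitive's behaviour fits on one line, which is exactly what lets P&R tools match HDL onto them; no such one-liner exists for "CPU running a program".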

A P&R tool can build a system out of these building blocks.  It's notable
that the state-holding elements in this schema do nothing else except
holding state.  That makes writing the tools easier (and we all know how
difficult the tools already are).  In general, we don't tend to instantiate
these primitives manually but describe the higher level functions (eg a 64
bit add) in HDL and allow the tools to select appropriate primitives for us
(eg a number of fast-adder blocks chained together).

What's the logic equation of a processor?  It has state, but vastly more
state than the simplicity of a flipflop.  What pattern does the P&R tool
need to match to infer a processor?  How is any verification tool going
to understand whether the processor with software is doing the right thing?

If your answer is 'we don't need verification tools, we program by hand'
then a) software has bugs, and automated verification is a handy way to
catch them, and b) you're never going to be writing hundreds of different
mini-programs to run on each core, let alone make them correct.

If we scale the processors up a bit, I could see the merits in say a bank
of, say, 32 Cortex M0s that could be interconnected as part of the FPGA
fabric and programmed in software for dedicated tasks (for instance, read
the I2C EEPROM on the DRAM DIMM and configure the DRAM controller at boot). 
But this is an SoC construct (built using SoC builder tools, and over which
the programmer has some purview although, as it turns out, sketchier than
you might think[1]).  Such CPUs would likely be running bigger corpora of
software (for instance, the DRAM controller vendor's provided initialisation
code) which would likely be in C.  But in this case we could just use a
soft-core today (the CPU ISA is most irrelevant for this application, so a
RISC-V/Microblaze/NIOS would be fine).

[1] https://inf.ethz.ch/personal/troscoe/pubs/hotos15-gerber.pdf

I can also see another niche, at the extreme bottom end, where a CPLD might
have one of your processors plus a few hundred logic cells.  That's
essentially a microcontroller with FPGA, or an FPGA with microcontroller -
which some of the vendors already produce (although possibly not
small/cheap/low power enough).  Here I can't see the advantages of using a
stack-based CPU versus paying a bit more to program in C.  Although I don't
have experience in markets where the retail price of the product is $1, and so
every $0.001 matters.

> > I would be interested to know what applications might use heterogenous
> > many-cores and what performance is achievable.
> 
> Yes, clearly not getting the concept.  Asking about heterogeneous
> performance is totally antithetical to this idea.

You keep mentioning 700 MIPS, which suggests performance is important.  If
these are simple state machine replacements, why do we care about
performance?


In essence, your proposal has a disconnect between the situations existing
FPGA blocks are used (implemented automatically by P&R tools) and the
situations software is currently used (human-driven software and
architectural design).  It's unclear how you claim to bridge this gap.

Theo

Article: 161248
Subject: Re: Tiny CPUs for Slow Logic
From: already5chosen@yahoo.com
Date: Wed, 20 Mar 2019 03:56:47 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Tuesday, March 19, 2019 at 10:07:38 PM UTC+2, Tom Gardner wrote:
> On 19/03/19 17:35, already5chosen@yahoo.com wrote:
> > On Tuesday, March 19, 2019 at 6:19:36 PM UTC+2, Tom Gardner wrote:
> >> The "granularity" of the computation and communication will be a key to
> >> understanding what the OP is thinking.
> >
> > I don't know what Rick had in mind. I personally would go for one "hard-CPU"
> > block per 4000-5000 6-input logic elements (i.e. Altera ALMs or Xilinx CLBs).
> > Each block could be configured either as one 64-bit core or a pair of 32-bit
> > cores. The block would contain hard instruction decoders/ALUs/shifters and
> > hard register files. It could optionally borrow adjacent DSP blocks for
> > multipliers. Adjacent embedded memory blocks could be used for data memory.
> > Code memory should be a bit more flexible, giving the designer a choice
> > between embedded memory blocks or distributed memory (X)/MLABs (A).
>
> It would be interesting to find an application level
> description (i.e. language constructs) that
>   - could be automatically mapped onto those primitives
>     by a toolset
>   - was useful for more than a niche subset of applications
>   - was significantly better than existing tools
>
> I wouldn't hold my breath :)


I think you are looking at it from the wrong angle.
One doesn't really need new tools to design and simulate such things. What's needed is a combination of existing tools - compilers, assemblers, probably software simulator plug-ins into existing HDL simulators, though the latter is just a luxury for speeding up simulations; in principle, feeding the HDL simulator an RTL model of the CPU core will work too.

As to niches, all "hard" blocks that we currently have in FPGAs are about niches. It's extremely rare that a user's design uses all or a majority of the features of a given FPGA device and needs LUTs, embedded memories, PLLs, multipliers, SERDESes, DDR DRAM I/O blocks etc. in the exact amounts appearing in the device.
It still makes sense, economically, to have them all built in, because masks and other NREs are mighty expensive while silicon itself is relatively cheap. Multiple small hard CPU cores are really not very different from the features mentioned above.

Article: 161249
Subject: Re: Tiny CPUs for Slow Logic
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Wed, 20 Mar 2019 11:00:52 +0000
Links: << >>  << T >>  << A >>
On 20/03/19 10:41, already5chosen@yahoo.com wrote:
> On Tuesday, March 19, 2019 at 10:07:38 PM UTC+2, Tom Gardner wrote:
>> On 19/03/19 17:35, already5chosen@yahoo.com wrote:
>>> On Tuesday, March 19, 2019 at 6:19:36 PM UTC+2, Tom Gardner wrote:
>>>>
>>>> The UK Parliament is an unmitigated dysfunctional mess.
>>>>
>>>
>>> Do you prefer dysfunctional mesh ;)
>>
>> :) I'll settle for anything that /works/ predictably :(
>>
> 
> UK political system is completely off-topic in comp.arch.fpga. However I'd say that IMHO right now your parliament is facing unusually difficult problem on one hand, but at the same time it's not really "life or death" sort of the problem. Having troubles and appearing non-decisive in such situation is normal. It does not mean that the system is broken.
> 

Firstly, you chose to snip the analogy, thus removing the context.

Secondly, actually currently there are /very/ plausible reasons
to believe it might be life or death for my 98yo mother, and
may hasten my death. No, I'm not going to elaborate on a public
forum.

I will note that Operation Yellowhammer will, barring miracles,
be started on Monday, and that a prominent *brexiteer* (Michael Gove)
is shit scared of a no-deal exit because all the chemicals required
to purify our drinking water come from Europe.


