
Messages from 160025

Article: 160025
Subject: Re: increment or decrement one of 16, 16-bit registers
From: rickman <gnuarm@gmail.com>
Date: Sat, 13 May 2017 17:08:46 -0400
On 5/13/2017 4:07 PM, Jecel wrote:
> On Thursday, May 11, 2017 at 7:21:33 PM UTC-3, rickman wrote:
>> On 5/11/2017 5:55 PM, Kevin Neilson wrote:
>>> 24 cycles?  Holy smokes.  I remember most of the 6502 instructions
>>> being 2-3 cycles.
>>
>> No one ever said the 1802 was fast.  If you want slow, you should have
>> seen the 1801!  lol ;)
>
> Indeed, many early microprocessors looked a lot more impressive until you saw how many clock cycles each instruction took.
>
> But it is important to remember that there were two different clock styles and it is complicated to compare them directly.
>
> The 6502, 6800 and ARM2 used two non overlapping clocks. This required two pins and a more complicated external circuit but simplified the internal circuit. In a 1MHz 6502, for example, you have four different times in which things happen in each microsecond: when clock 1 is high, when both are low, when clock 2 is high and when both are low again.
>
> Many processors had a single clock pin, which allowed you to use a simple oscillator externally. But to have the same functionality of the 1MHz 6502 this single clock would have to be 4MHz so you could do four things in each microsecond. This was the case of the 68000, for example. The Z80 only needed to do three things.

I think the single vs. multiple clock issue was more of an evolutionary 
thing.  The early processors (including the 8080) required multiple 
phases on the supplied clocks.  After some time the new processors hid 
that clock generation internally and allowed the user to supply just a 
single clock phase.  Heck, I recall my TMS9900 had four non-overlapping 
clock phases and came in a huge 64 pin package.  I still have that board 
in the basement.

-- 

Rick C
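[The granularity comparison in the quoted text can be put into back-of-the-envelope arithmetic. A Python sketch; the figures follow the 1 MHz 6502 example above, and nothing here comes from a datasheet:]

```python
# Rough comparison of the two clocking styles discussed above.
# "Timeslots" = distinct windows per second in which something can happen.

def timeslots_per_second(clock_hz, slots_per_cycle):
    """Number of distinct 'things can happen' windows per second."""
    return clock_hz * slots_per_cycle

# A 1 MHz 6502 with two non-overlapping phases gives four windows per
# cycle: phi1 high, both low, phi2 high, both low again.
m6502 = timeslots_per_second(1_000_000, 4)

# A single-clock part (one window per cycle, as with the 68000) needs a
# 4 MHz clock to match that granularity.
m68000 = timeslots_per_second(4_000_000, 1)

print(m6502, m68000)  # both 4,000,000 windows per second
```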

Article: 160026
Subject: Re: increment or decrement one of 16, 16-bit registers
From: Tim Wescott <tim@seemywebsite.really>
Date: Sat, 13 May 2017 16:21:08 -0500
On Sat, 13 May 2017 13:07:40 -0700, Jecel wrote:

> On Thursday, May 11, 2017 at 7:21:33 PM UTC-3, rickman wrote:
>> On 5/11/2017 5:55 PM, Kevin Neilson wrote:
>> > 24 cycles?  Holy smokes.  I remember most of the 6502 instructions
>> > being 2-3 cycles.
>> 
>> No one ever said the 1802 was fast.  If you want slow, you should have
>> seen the 1801!  lol ;)
> 
> Indeed, many early microprocessors looked a lot more impressive until
> you saw how many clock cycles each instruction took.
> 
> But it is important to remember that there were two different clock
> styles and it is complicated to compare them directly.
> 
> The 6502, 6800 and ARM2 used two non overlapping clocks. This required
> two pins and a more complicated external circuit but simplified the
> internal circuit. In a 1MHz 6502, for example, you have four different
> times in which things happen in each microsecond: when clock 1 is high,
> when both are low, when clock 2 is high and when both are low again.
> 
> Many processors had a single clock pin, which allowed you to use a
> simple oscillator externally. But to have the same functionality of the
> 1MHz 6502 this single clock would have to be 4MHz so you could do four
> things in each microsecond. This was the case of the 68000, for example.
> The Z80 only needed to do three things.
> 
> -- Jecel

At least the internal timing of the 1802 shows some things happening on 
half-clock boundaries.  I'm not sure if this reflects a requirement 
for a 50% duty cycle clock, however.



-- 
www.wescottdesign.com

Article: 160027
Subject: Pipelining on Multiple Clock Edges
From: rickman <gnuarm@gmail.com>
Date: Sat, 13 May 2017 17:52:35 -0400
I recall a processor implementation where the guy tried to say that one 
particular part of the pipeline design had a register inserted which was 
clocked on the negative edge.  I could never see how this would 
positively impact anything.  In fact, the setup and hold time of the 
register, not to mention the routing time, would add to the delay in 
that pipeline stage.

Was I missing something or is this ever used to advantage?

-- 

Rick C
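[One way to see the budget argument in the question numerically. A Python sketch with invented timing numbers, not vendor data:]

```python
# A register clocked mid-cycle on the falling edge splits the period in
# two but does not raise the clock rate, and each half-cycle now pays
# its own register overhead out of the same period.

T = 10.0     # clock period, ns (assumed)
t_cq = 0.6   # register clock-to-out, ns (assumed)
t_su = 0.4   # register setup time, ns (assumed)

# Single positive-edge stage: logic gets the whole period minus overhead.
budget_single = T - t_cq - t_su

# With an added negative-edge register at mid-cycle, the two halves each
# lose t_cq + t_su, so the total combinational budget shrinks.
budget_split = (T / 2 - t_cq - t_su) * 2

print(budget_single, budget_split)  # 9.0 vs 8.0 ns of usable logic time
```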

Article: 160028
Subject: Re: increment or decrement one of 16, 16-bit registers
From: Allan Herriman <allanherriman@hotmail.com>
Date: 14 May 2017 01:16:54 GMT
On Sat, 13 May 2017 13:07:40 -0700, Jecel wrote:

> On Thursday, May 11, 2017 at 7:21:33 PM UTC-3, rickman wrote:
>> On 5/11/2017 5:55 PM, Kevin Neilson wrote:
>> > 24 cycles?  Holy smokes.  I remember most of the 6502 instructions
>> > being 2-3 cycles.
>> 
>> No one ever said the 1802 was fast.  If you want slow, you should have
>> seen the 1801!  lol ;)
> 
> Indeed, many early microprocessors looked a lot more impressive until
> you saw how many clock cycles each instruction took.
> 
> But it is important to remember that there were two different clock
> styles and it is complicated to compare them directly.
> 
> The 6502, 6800 and ARM2 used two non overlapping clocks. This required
> two pins and a more complicated external circuit but simplified the
> internal circuit. In a 1MHz 6502, for example, you have four different
> times in which things happen in each microsecond: when clock 1 is high,
> when both are low, when clock 2 is high and when both are low again.
> 
> Many processors had a single clock pin, which allowed you to use a
> simple oscillator externally. But to have the same functionality of the
> 1MHz 6502 this single clock would have to be 4MHz so you could do four
> things in each microsecond. This was the case of the 68000, for example.
> The Z80 only needed to do three things.
> 
> -- Jecel


Motorola's MC6809 was available in both clocking varieties - The 'E' 
suffix part number was the one with the external clock generator and two 
quadrature clock input pins (called E and Q).
The non-'E' suffix part number had one clock input pin (EXTAL) and 
divided by four internally.  E and Q were outputs in this case.
A pin (MRDY) was available to freeze the divide by four counter to insert 
wait states.

It had an 8 bit ALU.  16 bit operations took two cycles, and the 8 x 8 
multiply took 8 cycles.


I vaguely recall wire wrapping one of these as a hobby project in the 
early to mid '80s.

Regards,
Allan

Article: 160029
Subject: Re: increment or decrement one of 16, 16-bit registers
From: rickman <gnuarm@gmail.com>
Date: Sat, 13 May 2017 21:45:02 -0400
On 5/13/2017 9:16 PM, Allan Herriman wrote:
>
> I vaguely recall wire wrapping one of these as a hobby project in the
> early to mid '80s.

I vaguely recall wire wrapping!

-- 

Rick C

Article: 160030
Subject: Re: Pipelining on Multiple Clock Edges
From: Gabor <nospam@nospam.com>
Date: Sun, 14 May 2017 16:14:00 -0400
On Saturday, 5/13/2017 5:52 PM, rickman wrote:
> I recall a processor implementation where the guy tried to say that one 
> particular part of the pipeline design had a register inserted which was 
> clocked on the negative edge.  I could never see how this would 
> positively impact anything.  In fact, the setup and hold time of the 
> register, not to mention the routing time, would add to the delay in 
> that pipeline stage.
> 
> Was I missing something or is this ever used to advantage?
> 

Opposite edge pipe registers can be useful if your clock distribution
scheme is not able to guarantee the required hold time.  I've used
this in early Xilinx parts that had only 4 internal clock buffers
and I needed to bring in more (relatively slow) inputs using an
additional clock.  In those parts you could use "low skew nets" to
route a clock, but even then you'd have hold time issues.  In that
particular design everything on the poorly routed clocks went back
and forth between clock edges.  That included things like counters,
which would typically use a single N-wide register and feedback from
their own outputs.  Instead I needed two N-wide registers (one on
each clock) to remove hold time in the feedback paths.  Obviously
this would be painful to do a whole design in, but for me it worked
enough to get the data into distributed RAM for transfer to one of
the internal global clock domains.

-- 
Gabor
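[Gabor's two-register counter can be modeled at cycle level in a few lines. A toy Python model, not real RTL:]

```python
# Toy model of the ping-pong counter described above: two N-wide
# registers, one updated on each clock edge, with the feedback always
# crossing to the *other* edge's register so a skewed clock cannot
# violate hold against its own output.

def run_pingpong_counter(cycles):
    pos_reg = 0   # updated on the rising edge
    neg_reg = 0   # updated on the falling edge
    for _ in range(cycles):
        pos_reg = neg_reg + 1   # rising edge: consume the negedge copy
        neg_reg = pos_reg       # falling edge: re-capture the new value
    return pos_reg

print(run_pingpong_counter(5))  # still counts one per full clock cycle
```

The cost is doubled register usage, as the post notes, but the feedback path never closes on the same edge that launched it.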

Article: 160031
Subject: Re: Pipelining on Multiple Clock Edges
From: rickman <gnuarm@gmail.com>
Date: Mon, 15 May 2017 01:14:45 -0400
On 5/14/2017 4:14 PM, Gabor wrote:
> On Saturday, 5/13/2017 5:52 PM, rickman wrote:
>> I recall a processor implementation where the guy tried to say that
>> one particular part of the pipeline design had a register inserted
>> which was clocked on the negative edge.  I could never see how this
>> would positively impact anything.  In fact, the setup and hold time of
>> the register, not to mention the routing time, would add to the delay
>> in that pipeline stage.
>>
>> Was I missing something or is this ever used to advantage?
>>
>
> Opposite edge pipe registers can be useful if your clock distribution
> scheme is not able to guarantee the required hold time.  I've used
> this in early Xilinx parts that had only 4 internal clock buffers
> and I needed to bring in more (relatively slow) inputs using an
> additional clock.  In those parts you could use "low skew nets" to
> route a clock, but even then you'd have hold time issues.  In that
> particular design everything on the poorly routed clocks went back
> and forth between clock edges.  That included things like counters,
> which would typically use a single N-wide register and feedback from
> their own outputs.  Instead I needed two N-wide registers (one on
> each clock) to remove hold time in the feedback paths.  Obviously
> this would be painful to do a whole design in, but for me it worked
> enough to get the data into distributed RAM for transfer to one of
> the internal global clock domains.

This is an issue of poor clock distribution.  The guy using the opposite 
edge registers claimed it added a pipeline stage just as a positive edge 
register would.  Even if this were done for all logic in all stages, it 
would not be the same as adding more positive edge registers, because it 
doesn't speed up the clock.  In fact, the added setup and hold time of 
the extra register slows down the circuit.

-- 

Rick C

Article: 160032
Subject: Re: Pipelining on Multiple Clock Edges
From: lasselangwadtchristensen@gmail.com
Date: Mon, 15 May 2017 09:16:43 -0700 (PDT)
On Saturday, May 13, 2017 at 11:52:37 PM UTC+2, rickman wrote:
> I recall a processor implementation where the guy tried to say that one 
> particular part of the pipeline design had a register inserted which was 
> clocked on the negative edge.  I could never see how this would 
> positively impact anything.  In fact, the setup and hold time of the 
> register, not to mention the routing time, would add to the delay in 
> that pipeline stage.
> 
> Was I missing something or is this ever used to advantage?
> 

I guess there could be some way that the logic going to and from that 
register is fast enough that it would be possible to get an extra cycle 
for free.

Article: 160033
Subject: Re: increment or decrement one of 16, 16-bit registers
From: Tim Wescott <tim@seemywebsite.really>
Date: Mon, 15 May 2017 11:37:32 -0500
On Wed, 10 May 2017 16:42:59 -0500, Tim Wescott wrote:

> I've been geeking out on the COSMAC 1802 lately -- it was the first
> processor that I owned all just for me, and that I wrote programs for
> (in machine code -- not assembly).
> 
> One of the features of this chip is that while the usual ALU is 8-bit
> and centered around memory fetches and the accumulator (which they call
> the 'D' register), there's a 16 x 16-bit register file.  Any one of
> these registers can be incremented or decremented, either as an explicit
> instruction or as part of a fetch (basically, you can use any one of
> them as an index, and you can "fetch and increment").
> 
> How would you do this most effectively today?  How might it have been
> done back in the mid 1970's when RCA made the chip?  Would it make a
> difference if you were working with a CPLD, FPGA, or some ASIC where you
> were determined to minimize chip area?
> 
> I'm assuming that the original had one selectable increment/decrement
> unit that wrote back numbers to the registers, but I could see them
> implementing each register as a loadable counter -- I just don't have a
> good idea of what might use the least real estate.
> 
> Thanks.

Found the patent for the 1801; it shows the increment unit as separate 
from the registers, and separate from the ALU.  I would assume that if 
it's to be binding, the patent needs to reflect what's actually there, at 
least in the large:

http://www.cosmacelf.com/publications/data-sheets/cdp1802-rca.pdf

-- 
www.wescottdesign.com

Article: 160034
Subject: Re: Pipelining on Multiple Clock Edges
From: Jecel <jecel@merlintec.com>
Date: Mon, 15 May 2017 11:01:49 -0700 (PDT)
On Saturday, May 13, 2017 at 6:52:37 PM UTC-3, rickman wrote:
> I recall a processor implementation where the guy tried to say that one 
> particular part of the pipeline design had a register inserted which was 
> clocked on the negative edge.  I could never see how this would 
> positively impact anything.  In fact, the setup and hold time of the 
> register, not to mention the routing time, would add to the delay in 
> that pipeline stage.

Sometimes you want a pipeline stage to work in a different clock phase 
from other stages. This is sometimes done to fit the write-back stage 
and the op fetch stage in the same clock cycle. Another example was the 
original MIPS R2000 and how it used the same pins for both the 
instruction and data caches by using a different phase for the fetch 
pipeline stage.

And while it is something different, see how the three stage ARM Cortex 
M0+ pipeline is made to look like a two stage pipeline:

http://microchipdeveloper.com/32arm:m0-pipeline

The alternative is to use a clock with twice the frequency and have 
enables that make some stages work on even clocks and others on odd ones.

-- Jecel

Article: 160035
Subject: Re: Pipelining on Multiple Clock Edges
From: Kevin Neilson <kevin.neilson@xilinx.com>
Date: Mon, 15 May 2017 11:29:35 -0700 (PDT)
> Was I missing something or is this ever used to advantage?

I imagine it was used to transfer slack from one stage to another.  
Imagine it's 1976, and you have everything laid out, but then you find 
that you have some stage with negative slack (let's say a multiplier) 
followed by a stage with positive slack (let's say a mux).  It's hard to 
move registers back into the multiplier, partly because it would 
increase the number of FFs, and partly because it's 1976 and you'd have 
to re-tape everything.  So you just have the mux grab the data on the 
falling clock edge, transferring half a period of slack from the mux to 
the multiplier so the multiplier has 1.5 cycles and the mux has 0.5.  
Something like that.
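[The half-period bookkeeping works out like this. A Python sketch with invented numbers:]

```python
# Slack-transfer arithmetic for the scenario described above.
# All delays are illustrative, not from any real 1976 layout.

T = 100.0            # clock period, ns
mult_delay = 130.0   # multiplier path: 30 ns of negative slack at 1 cycle
mux_delay = 20.0     # mux path: lots of positive slack at 1 cycle

# Move the mux's capture register to the falling edge: the multiplier
# now gets 1.5 periods and the mux 0.5.
mult_budget = 1.5 * T
mux_budget = 0.5 * T

mult_slack = mult_budget - mult_delay   # negative slack becomes +20 ns
mux_slack = mux_budget - mux_delay      # mux keeps +30 ns of margin

print(mult_slack, mux_slack)
```

As Gabor points out below, this only holds if the multiplier's *minimum* delay is also guaranteed to exceed half a period, so the falling-edge register never captures the next value early.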

Article: 160036
Subject: Re: Pipelining on Multiple Clock Edges
From: "Rick C. Hodgin" <rick.c.hodgin@gmail.com>
Date: Mon, 15 May 2017 11:31:04 -0700 (PDT)
On Saturday, May 13, 2017 at 5:52:37 PM UTC-4, rickman wrote:
> I recall a processor implementation where the guy tried to say that one 
> particular part of the pipeline design had a register inserted which was 
> clocked on the negative edge.  I could never see how this would 
> positively impact anything.  In fact, the setup and hold time of the 
> register, not to mention the routing time, would add to the delay in 
> that pipeline stage.
> 
> Was I missing something or is this ever used to advantage?

I don't know if you have seen this before, but something similar is
described in the book, "But How Do It Know?" by J. Clark Scott:

    https://www.amazon.com/But-How-Know-Principles-Computers/dp/0615303765

Someone made a video describing how it is useful for certain types
of slow-clock CPUs:

    https://www.youtube.com/watch?v=cNN_tTXABUA

If you look, the computation takes place nearer to the positive edge, 
and the write operation takes place nearer to the negative edge, so 
that enough time passes in between to carry out the workload.

I've seen several designs which trigger in this way.  There are also 
several methods described in (I believe) Lattice documentation, which 
show how to merge multiple clock signals together to obtain a clock 
signal that fires around the negative edge, or around the positive 
edge, for various purposes.

Thank you,
Rick C. Hodgin

Article: 160037
Subject: Re: Pipelining on Multiple Clock Edges
From: Gabor <nospam@nospam.com>
Date: Mon, 15 May 2017 14:59:09 -0400
On Monday, 5/15/2017 2:29 PM, Kevin Neilson wrote:
>> Was I missing something or is this ever used to advantage?
> 
> I imagine it was used to transfer slack from one stage to another.  Imagine it's 1976, and you have everything laid out, but then you find that you have some stage with negative slack (let's say a multiplier) followed by a stage with positive slack (let's say a mux).  It's hard to move registers back into the multiplier, partly because it would increase the number of FFs, and partly because it's 1976 and you'd have to re-tape everything.  So you just have the mux grab the data on the falling clock edge, transferring half a period of slack from the mux to the multiplier so the multiplier has 1.5 cycles and the mux has 0.5.  Something like that.
> 

That implies that the minimum prop delay of the multiplier is
guaranteed to be more than 1/2 clock period.  Probably also a
good bet in 1976.  In any case this doesn't represent a pipe
stage for 1/2 clock but rather for 1 1/2 clocks.

-- 
Gabor

Article: 160038
Subject: Re: Pipelining on Multiple Clock Edges
From: Kevin Neilson <kevin.neilson@xilinx.com>
Date: Mon, 15 May 2017 17:20:01 -0700 (PDT)
> That implies that the minimum prop delay of the multiplier is
> guaranteed to be more than 1/2 clock period.  Probably also a
> good bet in 1976.  In any case this doesn't represent a pipe
> stage for 1/2 clock but rather for 1 1/2 clocks.
> 
Yes, it depends on mintimes so it's a poor design technique and would probably stop working when you shrink the die.  

Article: 160039
Subject: Configuration fault recovery
From: Yannick Lamarre <yan.lamarre@gmail.com>
Date: Tue, 16 May 2017 13:15:55 -0700 (PDT)
Hi all,
I've been thinking about this problem for a while and shared it with a 
few colleagues, but no one has yet come up with an answer.
For some configurations, an FPGA can be configured so that two different 
drivers are connected to the same line internally. A practical example 
would be two BUFGs driving the same line on a Spartan6.
If those two drivers drive different values in a CMOS process, they will 
connect both rails together on a low impedance line. Obviously, this 
will cause damage to the chip.
Now the question is: how long can it stay in this state before it breaks?
An easier starter question: what is likely to break first, and how?
The follow-up to all of this is: can we design a current-limiter/cut-off 
circuit fast enough to prevent destruction of the chip?

Regards,
Yannick Lamarre

Article: 160040
Subject: Test Driven Design?
From: Tim Wescott <tim@seemywebsite.really>
Date: Tue, 16 May 2017 15:21:49 -0500
Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development, 
because It Just Works -- you lengthen the time to integration a bit, but 
vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional "make 
a test bench" is part way there, but as presented in my textbook* doesn't 
impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an equivalent 
design process with a different name, or what?

* "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer, 
1998.

-- 
www.wescottdesign.com

Article: 160041
Subject: Re: Configuration fault recovery
From: BobH <wanderingmetalhead.nospam.please@yahoo.com>
Date: Tue, 16 May 2017 15:04:41 -0700
On 05/16/2017 01:15 PM, Yannick Lamarre wrote:
> Hi all,
> I've been thinking about this problem for a while and shared it with a few colleagues, but no one has yet to come up with an answer.
> For some configuration, an FPGA can be configured so that two different drivers are connected on that same line internally. A practical example would be two BUFGs driving the same line on a Spartan6.
> If those two drivers are driving a different value in a CMOS process, it will connect both rails together on a low impedance line. Obviously, this will cause damages to the chip.

I don't think that the tool chain will let you do that. There are 
several steps that should be able to catch it and error out. This is 
assuming that you are using a "mature" tool chain.

Try manually instantiating two drivers to the same clock line and run it 
through the tools. It may disconnect one for you or it may just refuse 
to complete. If it automagically disconnects one for you, it may take 
some real digging in the log files to find it, but I think it will just 
error out.

BobH



Article: 160042
Subject: Re: Test Driven Design?
From: Theo Markettos <theom+news@chiark.greenend.org.uk>
Date: 17 May 2017 00:47:39 +0100 (BST)
Tim Wescott <tim@seemywebsite.really> wrote:
> Anyone doing any test driven design for FPGA work?
> 
> I've gone over to doing it almost universally for C++ development, 
> because It Just Works -- you lengthen the time to integration a bit, but 
> vastly shorten the actual integration time.
> 
> I did a web search and didn't find it mentioned -- the traditional "make 
> a test bench" is part way there, but as presented in my textbook* doesn't 
> impose a comprehensive suite of tests on each module.
> 
> So is no one doing it, or does it have another name, or an equivalent 
> design process with a different name, or what?

We do it.  We have an equivalence checker that fuzzes random inputs to
both the system and an executable 'golden model' of the system, looking for
discrepancies.  If found, it'll then reduce down to a minimal example.

In particular this is very handy because running the test cases is then
synthesisable: so we can run the tests on FPGA rather than on a simulator.

Our paper has more details and the code is open source:
https://www.cl.cam.ac.uk/research/security/ctsrd/pdfs/201509-memocode2015-bluecheck.pdf

Theo
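[The fuzz-and-shrink loop Theo describes can be sketched in miniature in plain Python. The golden model, the planted bug, and the shrinker are all invented for illustration; the real tool is BlueCheck, per the linked paper:]

```python
# Drive random inputs into both an executable golden model and a
# deliberately buggy "implementation"; on a mismatch, shrink the
# failing input down to a minimal counterexample.

import random

def golden_popcount(x):
    """Golden model: count the set bits of x."""
    return bin(x).count("1")

def buggy_popcount(x):
    """Implementation with a planted bug for inputs above 200."""
    if x > 200:
        return bin(x).count("1") + 1
    return bin(x).count("1")

def fuzz_and_shrink(trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randrange(0, 256)
        if buggy_popcount(x) != golden_popcount(x):
            # Shrink: step down to the smallest still-failing input.
            while x > 0 and buggy_popcount(x - 1) != golden_popcount(x - 1):
                x -= 1
            return x
    return None

print(fuzz_and_shrink())  # reports 201, the minimal failing input
```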

Article: 160043
Subject: Re: Configuration fault recovery
From: Yannick Lamarre <yan.lamarre@gmail.com>
Date: Wed, 17 May 2017 08:40:44 -0700 (PDT)
On Tuesday, May 16, 2017 at 5:59:27 PM UTC-4, BobH wrote:
> On 05/16/2017 01:15 PM, Yannick Lamarre wrote:
> > Hi all,
> > I've been thinking about this problem for a while and shared it with 
> > a few colleagues, but no one has yet come up with an answer.
> > For some configurations, an FPGA can be configured so that two 
> > different drivers are connected to the same line internally. A 
> > practical example would be two BUFGs driving the same line on a 
> > Spartan6.
> > If those two drivers drive different values in a CMOS process, they 
> > will connect both rails together on a low impedance line. Obviously, 
> > this will cause damage to the chip.
> 
> I don't think that the tool chain will let you do that. There are 
> several steps that should be able to catch it and error out. This is 
> assuming that you are using a "mature" tool chain.
> 
> Try manually instantiating two drivers to the same clock line and run 
> it through the tools. It may disconnect one for you or it may just 
> refuse to complete. If it automagically disconnects one for you, it 
> may take some real digging in the log files to find it, but I think it 
> will just error out.
> 
> BobH

Hi Bob,
You are skipping the mental exercise here. What if cosmic rays toggle 
the configuration bits so that this scenario happens? That is highly 
possible in space, which is why there is a market for SEU 
controllers/monitors and the like. Now, back to the drawing board.

Article: 160044
Subject: Re: Test Driven Design?
From: Ilya Kalistru <stebanoid@gmail.com>
Date: Wed, 17 May 2017 08:43:42 -0700 (PDT)
I do a sloppy version of it.
Sometimes I allow myself not to write tests for simple, small modules 
that will be tested at a higher hierarchy level anyway.  (I also write 
tests for modules at different hierarchy levels.)
Tests are randomized and launched with a different seed every time.  If 
there is a problem, I can also relaunch the failing test with a specific 
seed to reproduce it.
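[A minimal sketch of that seed discipline in plain Python; the 16-bit adder check stands in for a real DUT and every name here is invented:]

```python
# Random but reproducible tests: each run uses a fresh seed, the seed is
# reported on failure, and a failing run can be replayed by pinning it.

import random

def random_test(seed):
    rng = random.Random(seed)
    a = rng.randrange(0, 2**16)
    b = rng.randrange(0, 2**16)
    # Model under test: 16-bit wrapping add (stand-in for a DUT check).
    dut = (a + b) & 0xFFFF
    expected = (a + b) % 65536
    assert dut == expected, f"failed with seed={seed}: a={a} b={b}"
    return True

# Normal runs would use: seed = random.randrange(2**32)
# To reproduce a failure, pin the seed the failing run reported:
print(random_test(seed=1234))
```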

Article: 160045
Subject: Re: Test Driven Design?
From: rickman <gnuarm@gmail.com>
Date: Wed, 17 May 2017 11:47:10 -0400
On 5/16/2017 4:21 PM, Tim Wescott wrote:
> Anyone doing any test driven design for FPGA work?
>
> I've gone over to doing it almost universally for C++ development,
> because It Just Works -- you lengthen the time to integration a bit, but
> vastly shorten the actual integration time.
>
> I did a web search and didn't find it mentioned -- the traditional "make
> a test bench" is part way there, but as presented in my textbook* doesn't
> impose a comprehensive suite of tests on each module.
>
> So is no one doing it, or does it have another name, or an equivalent
> design process with a different name, or what?
>
> * "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
> 1998.

I'm not clear on all of the details of what defines "test driven 
design", but I believe I've been using that all along.  I've thought of 
this as bottom up development where the lower level code is written 
first *and thoroughly tested* before writing the next level of code.

How does "test driven design" differ from this significantly?

-- 

Rick C

Article: 160046
Subject: Re: Test Driven Design?
From: Rob Gaddi <rgaddi@highlandtechnology.invalid>
Date: Wed, 17 May 2017 09:35:12 -0700
On 05/16/2017 01:21 PM, Tim Wescott wrote:
> Anyone doing any test driven design for FPGA work?
>
> I've gone over to doing it almost universally for C++ development,
> because It Just Works -- you lengthen the time to integration a bit, but
> vastly shorten the actual integration time.
>
> I did a web search and didn't find it mentioned -- the traditional "make
> a test bench" is part way there, but as presented in my textbook* doesn't
> impose a comprehensive suite of tests on each module.
>
> So is no one doing it, or does it have another name, or an equivalent
> design process with a different name, or what?
>
> * "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
> 1998.
>

We don't do classical "first you write a testbench and prove it fails, 
then you write the code that makes it pass" TDD but we do a whole lot of 
unit testing before we try to integrate submodules into the larger design.

I get a ton of mileage from OSVVM (http://osvvm.org/) for constrained 
random verification.

-- 
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.

Article: 160047
Subject: Re: Test Driven Design?
From: Tim Wescott <seemywebsite@myfooter.really>
Date: Wed, 17 May 2017 12:17:13 -0500
On Wed, 17 May 2017 11:47:10 -0400, rickman wrote:

> On 5/16/2017 4:21 PM, Tim Wescott wrote:
>> Anyone doing any test driven design for FPGA work?
>>
>> I've gone over to doing it almost universally for C++ development,
>> because It Just Works -- you lengthen the time to integration a bit,
>> but vastly shorten the actual integration time.
>>
>> I did a web search and didn't find it mentioned -- the traditional
>> "make a test bench" is part way there, but as presented in my textbook*
>> doesn't impose a comprehensive suite of tests on each module.
>>
>> So is no one doing it, or does it have another name, or an equivalent
>> design process with a different name, or what?
>>
>> * "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
>> 1998.
> 
> I'm not clear on all of the details of what defines "test driven
> design", but I believe I've been using that all along.  I've thought of
> this as bottom up development where the lower level code is written
> first *and thoroughly tested* before writing the next level of code.
> 
> How does "test driven design" differ from this significantly?

The big difference in the software world is that the tests are automated 
and never retired.  There are generally test suites to make the mechanics 
of testing easier.  Ideally, whenever you do a build you run the entire 
unit-test suite fresh.  This means that when you tweak some low-level 
function, it still gets tested.

The other big difference, that's hard for one guy to do, is that if 
you're going Full Agile you have one guy writing tests and another guy 
writing "real" code.  Ideally they're equally good, and they switch off.  
The idea is basically that more brains on the problem is better.

If you look at the full description of TDD it looks like it'd be hard, 
slow, and clunky, because the recommendation is to do things at a very 
fine-grained level.  However, I've done it, and the process of adding 
features to a function as you add tests to the bench goes very quickly.  
The actual development of the bottom layer is a bit slower, but when you 
go to put the pieces together they just fall into place.

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
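[For flavor, here is what one fine-grained unit with its never-retired tests might look like in plain Python unittest. The Gray-code example is invented, not from the thread:]

```python
# Each new test is written first, then just enough of the module grows
# to make the whole suite pass; the suite runs on every build.

import unittest

def gray_encode(n):
    """Binary-to-Gray conversion -- the kind of small unit that
    accumulates tests as features are added."""
    return n ^ (n >> 1)

class TestGrayEncode(unittest.TestCase):
    def test_zero(self):
        self.assertEqual(gray_encode(0), 0)

    def test_adjacent_codes_differ_in_one_bit(self):
        for n in range(255):
            diff = gray_encode(n) ^ gray_encode(n + 1)
            self.assertEqual(bin(diff).count("1"), 1)

# Run the suite programmatically rather than via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestGrayEncode)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```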

Article: 160048
Subject: Re: Test Driven Design?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Wed, 17 May 2017 18:32:35 +0100
On 17/05/17 16:47, rickman wrote:
> On 5/16/2017 4:21 PM, Tim Wescott wrote:
>> Anyone doing any test driven design for FPGA work?
>>
>> I've gone over to doing it almost universally for C++ development,
>> because It Just Works -- you lengthen the time to integration a bit, but
>> vastly shorten the actual integration time.
>>
>> I did a web search and didn't find it mentioned -- the traditional "make
>> a test bench" is part way there, but as presented in my textbook* doesn't
>> impose a comprehensive suite of tests on each module.
>>
>> So is no one doing it, or does it have another name, or an equivalent
>> design process with a different name, or what?
>>
>> * "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
>> 1998.
>
> I'm not clear on all of the details of what defines "test driven design", but I
> believe I've been using that all along.  I've thought of this as bottom up
> development where the lower level code is written first *and thoroughly tested*
> before writing the next level of code.
>
> How does "test driven design" differ from this significantly?

In many software environments TDD - as it is
taught - more naturally fits top-down design.

That's not necessary, but that's the typical
mentality. TDD can and should be used for
"bottom-up" "integration tests".

The key point, all too often missed, is to /think/
about the benefits and disadvantages of each tool
in your armoury, and use only the most appropriate
combination for the problem at hand.


Article: 160049
Subject: Re: Test Driven Design?
From: rickman <gnuarm@gmail.com>
Date: Wed, 17 May 2017 13:39:55 -0400
Links: << >>  << T >>  << A >>
On 5/17/2017 1:17 PM, Tim Wescott wrote:
> On Wed, 17 May 2017 11:47:10 -0400, rickman wrote:
>
>> On 5/16/2017 4:21 PM, Tim Wescott wrote:
>>> Anyone doing any test driven design for FPGA work?
>>>
>>> I've gone over to doing it almost universally for C++ development,
>>> because It Just Works -- you lengthen the time to integration a bit,
>>> but vastly shorten the actual integration time.
>>>
>>> I did a web search and didn't find it mentioned -- the traditional
>>> "make a test bench" is part way there, but as presented in my textbook*
>>> doesn't impose a comprehensive suite of tests on each module.
>>>
>>> So is no one doing it, or does it have another name, or an equivalent
>>> design process with a different name, or what?
>>>
>>> * "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
>>> 1998.
>>
>> I'm not clear on all of the details of what defines "test driven
>> design", but I believe I've been using that all along.  I've thought of
>> this as bottom up development where the lower level code is written
>> first *and thoroughly tested* before writing the next level of code.
>>
>> How does "test driven design" differ from this significantly?
>
> The big difference in the software world is that the tests are automated
> and never retired.  There are generally test suites to make the mechanics
> of testing easier.  Ideally, whenever you do a build you run the entire
> unit-test suite fresh.  This means that when you tweak some low-level
> function, it still gets tested.
>
> The other big difference, that's hard for one guy to do, is that if
> you're going Full Agile you have one guy writing tests and another guy
> writing "real" code.  Ideally they're equally good, and they switch off.
> The idea is basically that more brains on the problem is better.
>
> If you look at the full description of TDD it looks like it'd be hard,
> slow, and clunky, because the recommendation is to do things at a very
> fine-grained level.  However, I've done it, and the process of adding
> features to a function as you add tests to the bench goes very quickly.
> The actual development of the bottom layer is a bit slower, but when you
> go to put the pieces together they just fall into place.

I guess I'm still not picturing it.  I think the part I don't get is 
"adding features to a function".  To me the features would *be* 
functions that are written, tested and then added to next higher level 
code.  So I assume what you wrote applies to that next higher level.

I program in two languages, Forth and VHDL.  In Forth functions (called 
"words") are written at *very* low levels, often a word is a single line 
of code and nearly all the time no more than five.  Being very small a 
word is much easier to write although the organization can be tough to 
settle on.

In VHDL I typically don't decompose the code into such fine grains.  It 
is easy to write the code for the pieces, registers and logic.  The hard 
part is how they interconnect/interrelate.  Fine decomposition tends to 
obscure that rather than enhancing it.  So I write large blocks of code 
to be tested.   I guess in those cases features would be "added" rather 
than new modules being written for the new functionality.

I still write test benches for each module in VHDL.  Because there is a 
lot more work in writing and using a VHDL test bench than a Forth test 
word, this also encourages larger (and fewer) modules.

Needless to say, I don't find much synergy between the two languages.

-- 

Rick C


