Messages from 159725

Article: 159725
Subject: Re: All-real FFT for FPGA
From: Tim Wescott <tim@seemywebsite.com>
Date: Tue, 14 Feb 2017 14:22:33 -0600
Links: << >>  << T >>  << A >>
On Tue, 14 Feb 2017 20:42:12 +0100, Christian Gollwitzer wrote:

> Am 14.02.17 um 20:21 schrieb rickman:
>> I don't know that -(a * b) wouldn't be exactly the same result as (a *
>> -b).
>>
>>
> It is bit-exact the same. In IEEE float, the sign is signalled by a
> single bit; it doesn't use two's complement or similar. Therefore, -x
> simply inverts the sign bit, and it doesn't matter if you do this before
> or after the multiplication. The only possible corner cases are special
> values like NaN and Inf; I believe that even then, the result is
> bit-exact the same, but I'm too lazy to check the standard.

While the use of floats in FPGAs is getting more and more common, I 
suspect that there are still loads of FFTs getting done in fixed-point.

I would need to do some very tedious searching to know for sure, but I 
suspect that if there's truncation, potential rollover, or 2's complement 
values of 'b1000...0 involved, then my condition may be violated.
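
A minimal simulation sketch of that corner case (assuming 16-bit signed 
operands and a 32-bit product; illustrative only, not code from the thread):

module fixed_negate_corner;
  reg signed [15:0] a, b, neg_b;
  reg signed [31:0] p1, p2;
  initial begin
    a     = 16'sd3;
    b     = 16'sh8000;   // 'b1000...0, the most negative 16-bit value
    neg_b = -b;          // negating at 16 bits wraps back to -32768
    p1    = -(a * b);    // negate the full-width product:   +98304
    p2    = a * neg_b;   // multiply by the wrapped operand: -98304
    $display("-(a*b) = %0d   (a * -b) = %0d", p1, p2);
  end
endmodule

The two results differ only because the negation of b was done at the 
operand width; done at the product width they would match.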

And I don't know how much an optimizer is going to analyze what's going 
on inside of a DSP block -- if you instantiate one in your design and 
feed it all zeros, is the optimizer smart enough to take it out?
 
-- 
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work!  See my website if you're interested
http://www.wescottdesign.com

Article: 159726
Subject: Re: All-real FFT for FPGA
From: spope33@speedymail.org (Steve Pope)
Date: Tue, 14 Feb 2017 21:56:54 +0000 (UTC)
Links: << >>  << T >>  << A >>
rickman  <gnuarm@gmail.com> wrote:

>On 2/14/2017 11:39 AM, Steve Pope wrote:

>> Tim Wescott  <tim@seemywebsite.com> wrote:

>>> On Tue, 14 Feb 2017 06:52:32 +0000, Steve Pope wrote:

>>>> If for example you compute (a * b) and also compute (a * -b),
>>>> the synthesizer is smart enough to know there are not two full
>>>> multipliers needed.

>>> I would be very leery of an optimizer that felt free to optimize things
>>> so that they are no longer bit-exact --

>> Of course, synthesizers need to be bit-exact and conform to the
>> HDL language spec

>>> and for some combinations of
>>> bits, I'm pretty sure that -(a * b) is not necessarily (a * -b).

>> That is not the example I gave, but in either example you still would
>> not need two multipliers, just one multiplier and some small amount of
>> logic; any reasonable synthesizer would not use two multipliers
>> worth of gates.

>How is that not the example you gave?  If the tool is going to use a 
>single multiplier for calculating (a * b) and (a * -b), 

Okay so far

> that implies it calculates (a * b)  and then negates that to get 
> (a * -b) as -(a * b), no?

Not necessarily exactly this.

>I don't know that -(a * b) wouldn't be exactly the same result as (a * 
>-b).

You just answered your own question.

Steve

Article: 159727
Subject: Re: All-real FFT for FPGA
From: rickman <gnuarm@gmail.com>
Date: Tue, 14 Feb 2017 17:20:05 -0500
Links: << >>  << T >>  << A >>
On 2/14/2017 4:56 PM, Steve Pope wrote:
> rickman  <gnuarm@gmail.com> wrote:
>
>> On 2/14/2017 11:39 AM, Steve Pope wrote:
>
>>> Tim Wescott  <tim@seemywebsite.com> wrote:
>
>>>> On Tue, 14 Feb 2017 06:52:32 +0000, Steve Pope wrote:
>
>>>>> If for example you compute (a * b) and also compute (a * -b),
>>>>> the synthesizer is smart enough to know there are not two full
>>>>> multipliers needed.
>
>>>> I would be very leery of an optimizer that felt free to optimize things
>>>> so that they are no longer bit-exact --
>
>>> Of course, synthesizers need to be bit-exact and conform to the
>>> HDL language spec
>
>>>> and for some combinations of
>>>> bits, I'm pretty sure that -(a * b) is not necessarily (a * -b).
>
>>> That is not the example I gave, but in either example you still would
>>> not need two multipliers, just one multiplier and some small amount of
>>> logic; any reasonable synthesizer would not use two multipliers
>>> worth of gates.
>
>> How is that not the example you gave?  If the tool is going to use a
>> single multiplier for calculating (a * b) and (a * -b),
>
> Okay so far
>
>> that implies it calculates (a * b)  and then negates that to get
>> (a * -b) as -(a * b), no?
>
> Not necessarily exactly this.

If not this, then what?  Why are you being so coy?


>> I don't know that -(a * b) wouldn't be exactly the same result as (a *
>> -b).
>
> You just answered your own question.

No, that's a separate question.  In fact, as Chris pointed out, the IEEE 
floating-point format is sign-magnitude, so it doesn't matter when you 
flip the sign bit; the rest of the number stays the same.  For two's 
complement the only issue would be the max negative int, which has no 
positive number of the same magnitude.  But I can't construct a case 
where that would be a problem for this example.  Still, if it is a 
problem for one combination, then there will be cases that fail for the 
other combination.  Typically this is avoided by not using the max 
negative value regardless of the optimizations.
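
A minimal simulation sketch of the floating-point case (using Verilog's 
real type, which simulators typically evaluate as IEEE-754 doubles):

module float_sign_check;
  real a, b, r1, r2;
  reg [63:0] nb, bits1, bits2;
  initial begin
    a  = 3.141592653589793;
    b  = 2.718281828459045;
    nb = $realtobits(b) ^ 64'h8000_0000_0000_0000;  // flip the sign bit: -b
    r1 = a * $bitstoreal(nb);                       // (a * -b)
    r2 = -(a * b);                                  // -(a * b)
    bits1 = $realtobits(r1);
    bits2 = $realtobits(r2);
    $display("bit-exact: %0d", bits1 == bits2);     // prints 1
  end
endmodule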

-- 

Rick C

Article: 159728
Subject: Re: All-real FFT for FPGA
From: rickman <gnuarm@gmail.com>
Date: Tue, 14 Feb 2017 17:24:22 -0500
Links: << >>  << T >>  << A >>
On 2/14/2017 3:22 PM, Tim Wescott wrote:
> On Tue, 14 Feb 2017 20:42:12 +0100, Christian Gollwitzer wrote:
>
>> Am 14.02.17 um 20:21 schrieb rickman:
>>> I don't know that -(a * b) wouldn't be exactly the same result as (a *
>>> -b).
>>>
>>>
>> It is bit-exact the same. In IEEE float, the sign is signalled by a
>> single bit; it doesn't use two's complement or similar. Therefore, -x
>> simply inverts the sign bit, and it doesn't matter if you do this before
>> or after the multiplication. The only possible corner cases are special
>> values like NaN and Inf; I believe that even then, the result is
>> bit-exact the same, but I'm too lazy to check the standard.
>
> While the use of floats in FPGAs is getting more and more common, I
> suspect that there are still loads of FFTs getting done in fixed-point.
>
> I would need to do some very tedious searching to know for sure, but I
> suspect that if there's truncation, potential rollover, or 2's complement
> values of 'b1000...0 involved, then my condition may be violated.
>
> And I don't know how much an optimizer is going to analyze what's going
> on inside of a DSP block -- if you instantiate one in your design and
> feed it all zeros, is the optimizer smart enough to take it out?

There is such a thing as "hard" IP, which means a block of logic that is 
already mapped, placed and routed.  Such blocks are rare, and they are 
not subject to any optimizations.  Otherwise yes, if you feed a constant 
into any block of logic, synthesis will not only replace that logic with 
constants on the output, it will also sweep out the registers that may 
have been added to improve throughput.  There are synthesis commands to 
avoid that, but otherwise the tool does whatever it sees as optimal.
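
A hedged sketch of the situation Tim asked about (a registered multiply 
with one input tied to zero; any DSP-block inference details are assumed):

module const_fed_mult (
  input  wire        clk,
  input  wire [17:0] a,
  output reg  [35:0] p
);
  wire [17:0] b = 18'd0;    // constant input
  reg  [35:0] p_r;
  always @(posedge clk) begin
    p_r <= a * b;           // constant-folds to 36'd0 at synthesis
    p   <= p_r;             // the register chain is then swept as well
  end
endmodule

Unless a keep/dont_touch style attribute (or a truly hard, pre-placed 
block) protects it, a typical tool reduces this to a constant zero output.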

The significant optimizations in an FFT come from recognizing all the 
redundant computations, which the optimizer will *not* be able to see 
because they are dispersed across columns and rows, or across time. 
Logic optimizations are much more obvious and basic.  Otherwise you 
could just feed the tool a DFT and let it work out the details.  Heck, 
that might get you some significant savings.  In a DFT done all at once, 
the tool might actually spot that you are multiplying a given input 
sample by the same coefficient multiple times.  But that's still not an 
FFT, which is much more subtle.

-- 

Rick C

Article: 159729
Subject: Re: All-real FFT for FPGA
From: spope33@speedymail.org (Steve Pope)
Date: Tue, 14 Feb 2017 22:48:01 +0000 (UTC)
Links: << >>  << T >>  << A >>
rickman  <gnuarm@gmail.com> wrote:

>On 2/14/2017 4:56 PM, Steve Pope wrote:

>> rickman  <gnuarm@gmail.com> wrote:

>>> On 2/14/2017 11:39 AM, Steve Pope wrote:

>>>> Tim Wescott  <tim@seemywebsite.com> wrote:

>>>>> On Tue, 14 Feb 2017 06:52:32 +0000, Steve Pope wrote:

>>>>>> If for example you compute (a * b) and also compute (a * -b),
>>>>>> the synthesizer is smart enough to know there are not two full
>>>>>> multipliers needed.

Note: I did not say that in an HDL, -(a * b) is equal in a bit-exact
sense to (a * -b).  

>>>>> I'm pretty sure that -(a * b) is not necessarily (a * -b).

We agree

>>> If the tool is going to use a
>>> single multiplier for calculating (a * b) and (a * -b),

>> Okay so far

>>> that implies it calculates (a * b)  and then negates that to get
>>> (a * -b) as -(a * b), no?

>> Not necessarily exactly this.

>If not this, then what?  Why are you being so coy?

Because one would have to look at the HDL language definition,
the declared bit widths and declared data types and possibly other stuff.

But you still wouldn't need two multipliers to compute the two
values in my example; just one multiplier plus some other logic
(much less than the gate count of a second multiplier) to take care of 
the extremal cases to make the output exactly match.
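
A sketch of the idea (assuming 18-bit operands; not the exact cases Steve 
has in mind):

module shared_mult (
  input  wire signed [17:0] a, b,
  output wire signed [35:0] p_pos,   // a * b
  output wire signed [35:0] p_neg    // a * (-b), formed as -(a * b)
);
  assign p_pos = a * b;
  assign p_neg = -p_pos;             // one multiplier plus a negate
endmodule

Here the negate is done at the full 36-bit product width, where an 18x18 
product can never reach the most negative value, so no fixup is needed; 
if -b were instead formed at 18 bits, the 'b1000...0 operand would be the 
extremal case needing a little extra logic.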

Steve

Article: 159730
Subject: Re: All-real FFT for FPGA
From: rickman <gnuarm@gmail.com>
Date: Tue, 14 Feb 2017 18:23:06 -0500
Links: << >>  << T >>  << A >>
On 2/14/2017 5:48 PM, Steve Pope wrote:
> rickman  <gnuarm@gmail.com> wrote:
>
>> On 2/14/2017 4:56 PM, Steve Pope wrote:
>
>>> rickman  <gnuarm@gmail.com> wrote:
>
>>>> On 2/14/2017 11:39 AM, Steve Pope wrote:
>
>>>>> Tim Wescott  <tim@seemywebsite.com> wrote:
>
>>>>>> On Tue, 14 Feb 2017 06:52:32 +0000, Steve Pope wrote:
>
>>>>>>> If for example you compute (a * b) and also compute (a * -b),
>>>>>>> the synthesizer is smart enough to know there are not two full
>>>>>>> multipliers needed.
>
> Note: I did not say that in an HDL, -(a * b) is equal in a bit-exact
> sense to (a * -b).
>
>>>>>> I'm pretty sure that -(a * b) is not necessarily (a * -b).
>
> We agree
>
>>>> If the tool is going to use a
>>>> single multiplier for calculating (a * b) and (a * -b),
>
>>> Okay so far
>
>>>> that implies it calculates (a * b)  and then negates that to get
>>>> (a * -b) as -(a * b), no?
>
>>> Not necessarily exactly this.
>
>> If not this, then what?  Why are you being so coy?
>
> Because one would have to look at the HDL language definition,
> the declared bit widths and declared data types and possibly other stuff.
>
> But you still wouldn't need two multipliers to compute the two
> values in my example; just one multiplier plus some other logic
> (much less than the gate count of a second multiplier) to take care of
> the extremal cases to make the output exactly match.

It was a *long* way around the woods to get here.  I seriously doubt any 
logic synthesis tool is going to substitute two multipliers and an adder 
with a single multiplier and a bunch of other logic.  Have you seen a 
tool that was that smart?  If so, that is a pretty advanced tool.

BTW, what *are* the extremal[sic] cases?

-- 

Rick C

Article: 159731
Subject: Re: All-real FFT for FPGA
From: spope33@speedymail.org (Steve Pope)
Date: Wed, 15 Feb 2017 00:35:35 +0000 (UTC)
Links: << >>  << T >>  << A >>
rickman  <gnuarm@gmail.com> wrote:

>On 2/14/2017 5:48 PM, Steve Pope wrote:

>> But you still wouldn't need two multipliers to compute the two
>> values in my example; just one multiplier plus some other logic
>> (much less than the gate count of a second multiplier) to take care of
>> the extremal cases to make the output exactly match.

>It was a *long* way around the woods to get here.  I seriously doubt any 
>logic synthesis tool is going to substitute two multipliers and an adder 
>with a single multiplier and a bunch of other logic.  

I very strongly disagree ... minimizing purely combinatorial logic
is the _sine_qua_non_ of a synthesis tool.  

Lots of other things synthesizers try to do are more intricate and less
predictable, but not this.

Steve

Article: 159732
Subject: Re: All-real FFT for FPGA
From: rickman <gnuarm@gmail.com>
Date: Tue, 14 Feb 2017 21:54:49 -0500
Links: << >>  << T >>  << A >>
On 2/14/2017 7:35 PM, Steve Pope wrote:
> rickman  <gnuarm@gmail.com> wrote:
>
>> On 2/14/2017 5:48 PM, Steve Pope wrote:
>
>>> But you still wouldn't need two multipliers to compute the two
>>> values in my example; just one multiplier plus some other logic
>>> (much less than the gate count of a second multiplier) to take care of
>>> the extremal cases to make the output exactly match.
>
>> It was a *long* way around the woods to get here.  I seriously doubt any
>> logic synthesis tool is going to substitute two multipliers and an adder
>> with a single multiplier and a bunch of other logic.
>
> I very strongly disagree ... minimizing purely combinatorial logic
> is the _sine_qua_non_ of a synthesis tool.
>
> Lots of other things synthesizers try to do are more intricate and less
> predictable, but not this.

I hear you, but I don't think the tools will be able to "see" the 
simplifications as they are not strictly logic simplifications.  Most of 
it is trig.

Take a look at just how the FFT works.  The combinations of 
multiplications work out because of properties of the sine function, not 
because of Boolean logic.
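
For reference, the sort of identity involved (with the usual twiddle 
factor W_N = exp(-j*2*pi/N)) is

  W_N^(k + N/2) = -W_N^k      and      W_N^(k + N) = W_N^k

so each radix-2 butterfly reuses one complex product for two outputs, and 
whole columns of the DFT collapse into shared sub-transforms.  Those are 
trigonometric identities over the data flow, not Boolean identities over 
gates, so a netlist optimizer has no way to discover them.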

-- 

Rick C

Article: 159733
Subject: Intel (Altera) announces Cyclone-10
From: GaborSzakacs <gabor@alacron.com>
Date: Thu, 16 Feb 2017 11:52:42 -0500
Links: << >>  << T >>  << A >>

It looks like Intel has learned to count from Microsoft.  The previous
generation of Cyclone was Cyclone-5.


https://www.altera.com/products/fpga/cyclone-series/cyclone-10.html

-- 
Gabor

Article: 159734
Subject: Re: Intel (Altera) announces Cyclone-10
From: rickman <gnuarm@gmail.com>
Date: Thu, 16 Feb 2017 14:10:01 -0500
Links: << >>  << T >>  << A >>
On 2/16/2017 11:52 AM, GaborSzakacs wrote:
>
> It looks like Intel has learned to count from Microsoft.  The previous
> generation of Cyclone was Cyclone-5.
>
>
> https://www.altera.com/products/fpga/cyclone-series/cyclone-10.html

Maybe they doubled the number because they're twice as good?

-- 

Rick C

Article: 159735
Subject: Re: Intel (Altera) announces Cyclone-10
From: rickman <gnuarm@gmail.com>
Date: Thu, 16 Feb 2017 14:37:05 -0500
Links: << >>  << T >>  << A >>
On 2/16/2017 2:10 PM, rickman wrote:
> On 2/16/2017 11:52 AM, GaborSzakacs wrote:
>>
>> It looks like Intel has learned to count from Microsoft.  The previous
>> generation of Cyclone was Cyclone-5.
>>
>>
>> https://www.altera.com/products/fpga/cyclone-series/cyclone-10.html
>
> Maybe they doubled the number because they're twice as good?

Cyclone 10 GX

"Twice higher performance than the previous generation of low cost FPGAs"

tee hee

Although it is interesting to note the GX (high performance) series has 
8 input ALMs, 20 kb memory blocks, 27x27 bit multipliers, floating point 
multipliers, coefficient register banks, all in a 20 nm process while 
the LP series has 4 input LUTs, 9 kb memory blocks, 18x18 bit 
multipliers, no floating point or coefficient register banks and no 
statement of process.  It would appear that to achieve low(er) power 
they opted for an older process, leveraging existing series of FPGAs for 
the LP series.  Like a Cyclone V redo.

It would be interesting to see what sort of stack CPU could be made with 
the GX series.  I wonder if the design software is out yet?

-- 

Rick C

Article: 159736
Subject: Re: Intel (Altera) announces Cyclone-10
From: rickman <gnuarm@gmail.com>
Date: Thu, 16 Feb 2017 15:15:00 -0500
Links: << >>  << T >>  << A >>
On 2/16/2017 2:37 PM, rickman wrote:
> On 2/16/2017 2:10 PM, rickman wrote:
>> On 2/16/2017 11:52 AM, GaborSzakacs wrote:
>>>
>>> It looks like Intel has learned to count from Microsoft.  The previous
>>> generation of Cyclone was Cyclone-5.
>>>
>>>
>>> https://www.altera.com/products/fpga/cyclone-series/cyclone-10.html
>>
>> Maybe they doubled the number because they're twice as good?
>
> Cyclone 10 GX
>
> "Twice higher performance than the previous generation of low cost FPGAs"
>
> tee hee
>
> Although it is interesting to note the GX (high performance) series has
> 8 input ALMs, 20 kb memory blocks, 27x27 bit multipliers, floating point
> multipliers, coefficient register banks, all in a 20 nm process while
> the LP series has 4 input LUTs, 9 kb memory blocks, 18x18 bit
> multipliers, no floating point or coefficient register banks and no
> statement of process.  It would appear that to achieve low(er) power
> they opted for an older process, leveraging existing series of FPGAs for
> the LP series.  Like a Cyclone V redo.
>
> It would be interesting to see what sort of stack CPU could be made with
> the GX series.  I wonder if the design software is out yet?

Looks to me like there is no support for these devices as yet.

They mention an M164 package which seems to be a type of BGA, but I 
can't find it in their package data sheet... or more accurately I can't 
find their package data sheet.  I keep finding package info on "mature" 
devices or other obsolete sheets.  I have a copy from 2007 which shows 
MBGA packages with 0.5 mm ball spacing, but the 164 pin part is not 
there.  The really weird part is the package data sheets I can find list 
updates to add the M164 part, but it is nowhere to be found in the 
technical data.   I guess they just copied the update table when they 
made the "mature device" data sheet.  Even that is dated 2011.  WTF!?

-- 

Rick C

Article: 159737
Subject: Re: Intel (Altera) announces Cyclone-10
From: thomas.entner99@gmail.com
Date: Thu, 16 Feb 2017 12:16:52 -0800 (PST)
Links: << >>  << T >>  << A >>

> >> It looks like Intel has learned to count from Microsoft.  The previous
> >> generation of Cyclone was Cyclone-5.

I think the numbering is the least concern with this "new" family (no surprise, as there is already Max 10, Arria 10 and Stratix 10 - with similar jumps).

However, it is pretty obvious that:
Cyclone 10 LP = Cyclone III / IV E
Cyclone 10 GX = Arria 10 GX

(Such a strategy has long tradition for Altera, look at FLEX10K/ACEX1K, Cyclone III/IV E, MAX II/V...)

It is mainly a marketing / pricing move, which of course is OK if there is pricing benefit for the customer. But I always found the way it was communicated pretty misleading... (Very dishonest. Fake news.) (I had no contact to an Altera/Intel FAE recently - not sure how they communicate this.)

However, especially Arria 10 GX for Cyclone pricing could be a real deal. (Great deal. So wonderful.)

The main question (for many applications) is if Cyclone 10 GX has a lower power consumption than Arria 10 Gx - I doubt.

Regards

Thomas (sorry, couldn't resist...)

Article: 159738
Subject: cmos delay vs temperature
From: John Larkin <jjlarkinxyxy@highlandtechnology.com>
Date: Thu, 16 Feb 2017 12:19:09 -0800
Links: << >>  << T >>  << A >>
I found one old Fairchild appnote that has some numbers

https://dl.dropboxusercontent.com/u/53724080/Parts/Logic/CMOS_Delay_Temp.pdf

which averages to around +3000 ppm/degC, or about +3 ps per ns of prop
delay per degree C. That's with 50 pF loading, sorta high.

This is HC, pretty old technology. 

I have a vague impression that the innards of a typical FPGA may be
better. Here's a ring oscillator inside an Altera FPGA, which looks
close to +1000 PPM/degC delay tempco. But that's deep inside, probably
CLB and not interconnect limited, and i/o cells may be different.

ECL is much better, generally way under 1000 PPM.

Do any semiconductor jocks have any comments on cmos tempco? 

Do any of the FPGA design tools report timing tempcos? I don't drive
those tools myself.

I suppose one could tweak Vcc vs temp to null out a native tempco.



-- 

John Larkin         Highland Technology, Inc
picosecond timing   precision measurement 

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com


Article: 159739
Subject: Re: Intel (Altera) announces Cyclone-10
From: GaborSzakacs <gabor@alacron.com>
Date: Thu, 16 Feb 2017 16:00:40 -0500
Links: << >>  << T >>  << A >>
rickman wrote:
> On 2/16/2017 2:37 PM, rickman wrote:
>> On 2/16/2017 2:10 PM, rickman wrote:
>>> On 2/16/2017 11:52 AM, GaborSzakacs wrote:
>>>>
>>>> It looks like Intel has learned to count from Microsoft.  The previous
>>>> generation of Cyclone was Cyclone-5.
>>>>
>>>>
>>>> https://www.altera.com/products/fpga/cyclone-series/cyclone-10.html
>>>
>>> Maybe they doubled the number because they're twice as good?
>>
>> Cyclone 10 GX
>>
>> "Twice higher performance than the previous generation of low cost FPGAs"
>>
>> tee hee
>>
>> Although it is interesting to note the GX (high performance) series has
>> 8 input ALMs, 20 kb memory blocks, 27x27 bit multipliers, floating point
>> multipliers, coefficient register banks, all in a 20 nm process while
>> the LP series has 4 input LUTs, 9 kb memory blocks, 18x18 bit
>> multipliers, no floating point or coefficient register banks and no
>> statement of process.  It would appear that to achieve low(er) power
>> they opted for an older process, leveraging existing series of FPGAs for
>> the LP series.  Like a Cyclone V redo.
>>
>> It would be interesting to see what sort of stack CPU could be made with
>> the GX series.  I wonder if the design software is out yet?
> 
> Looks to me like there is no support for these devices as yet.
> 
> They mention an M164 package which seems to be a type of BGA, but I 
> can't find it in their package data sheet... or more accurately I can't 
> find their package data sheet.  I keep finding package info on "mature" 
> devices or other obsolete sheets.  I have a copy from 2007 which shows 
> MBGA packages with 0.5 mm ball spacing, but the 164 pin part is not 
> there.  The really weird part is the package data sheets I can find list 
> updates to add the M164 part, but it is nowhere to be found in the 
> technical data.   I guess they just copied the update table when they 
> made the "mature device" data sheet.  Even that is dated 2011.  WTF!?
> 

As far as I can tell this is (very) advanced information.  I have to
wonder if the announcement was timed to take some wind out of the
sails of the MicroSemi "PolarFire" announcement:

https://www.microsemi.com/products/fpga-soc/fpga/polarfire-fpga

At the moment, both the Altera and MicroSemi offerings seem to be 
unobtainium...

-- 
Gabor

Article: 159740
Subject: Re: cmos delay vs temperature
From: rickman <gnuarm@gmail.com>
Date: Fri, 17 Feb 2017 05:06:16 -0500
Links: << >>  << T >>  << A >>
On 2/16/2017 3:19 PM, John Larkin wrote:
> I found one old Fairchild appnote that has some numbers
>
> https://dl.dropboxusercontent.com/u/53724080/Parts/Logic/CMOS_Delay_Temp.pdf
>
> which averages to around +3000 ppm/degC, or about +3 ps per ns of prop
> delay per degree C. That's with 50 pF loading, sorta high.
>
> This is HC, pretty old technology.
>
> I have a vague impression that the innards of a typical FPGA may be
> better. Here's a ring oscillator inside an Altera FPGA, which looks
> close to +1000 PPM/degC delay tempco. But that's deep inside, probably
> CLB and not interconnect limited, and i/o cells may be different.
>
> ECL is much better, generally way under 1000 PPM.
>
> Do any semiconductor jocks have any comments on cmos tempco?
>
> Do any of the FPGA design tools report timing tempcos? I don't drive
> those tools myself.
>
> I suppose one could tweak Vcc vs temp to null out a native tempco.

I've seen timing analysis tools that will evaluate the design at high 
temp, low temp or typical, but I've never seen them offer tempcos.  If 
you analyze your design at high and low temps it would be easy enough to 
calculate one, of course.  Analyze it at the typical temp as well, just 
to make sure it's linear.  But this may not be what you want.  These are 
not real numbers; they are worst-case production-run numbers.  I have no 
idea how they will compare to real-world numbers.
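
As a worked example (made-up numbers, just to show the arithmetic): if a 
path reports 5.00 ns at 0 degC and 5.25 ns at 85 degC, the implied tempco 
is (5.25 - 5.00) / 5.00 / 85 =~ 590 ppm/degC, or about 0.6 ps per ns of 
delay per degree.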

I know the timing analysis tools are not always accurate.  15 years ago 
Altera had moved on to Quartus for new work and MAX+II was only used for 
existing designs on previous-generation chips.  Its delay calculations 
for heavily loaded routes were not accurate, and our designs would fail 
when the part warmed up.  Quartus didn't support the chips then, so we 
had to shotgun it by routing some 10 to 20 runs a night and then testing 
them the next day with a chip heater.

The delay isn't all silicon, so I don't know how it would be calculated 
over temp.  What happens to the R and the C of metal runs on a chip with 
temperature?  Is that significant?  The actual delay certainly is.  Or 
that delay may mostly be in the Si switches used to interconnect the 
routes.  Don't know.  That's kinda the point of digital techniques: deal 
with the pesky analog effects to get them out of the picture so we can 
focus on the complicated stuff.

-- 

Rick C

Article: 159741
Subject: Re: Intel (Altera) announces Cyclone-10
From: already5chosen@yahoo.com
Date: Fri, 17 Feb 2017 02:21:12 -0800 (PST)
Links: << >>  << T >>  << A >>
On Thursday, February 16, 2017 at 10:16:54 PM UTC+2, thomas....@gmail.com wrote:
> > >> It looks like Intel has learned to count from Microsoft.  The previous
> > >> generation of Cyclone was Cyclone-5.
>
> I think the numbering is the least concern with this "new" family (no surprise, as there is already Max 10, Arria 10 and Stratix 10 - with similar jumps).
>
> However, it is pretty obvious that:
> Cyclone 10 LP = Cyclone III / IV E

So, if 10LP is renamed IV E, which in turn is renamed III, does it follow that 10LP is manufactured on TSMC 60 nm process?

> Cyclone 10 GX = Arria 10 GX

Including the two smallest ones?
Hopefully, you are too pessimistic about it.
If 10CX085 and 10CX105 are in reality just 10AX027 with majority of die fused off then its ratio of performance to static power consumption will be quite bad.
It happened to smaller members of Arria-II family and it was not nice.

>
> (Such a strategy has long tradition for Altera, look at FLEX10K/ACEX1K, Cyclone III/IV E, MAX II/V...)
>
> It is mainly a marketing / pricing move, which of course is OK if there is pricing benefit for the customer. But I always found the way it was communicated pretty misleading... (Very dishonest. Fake news.) (I had no contact to an Altera/Intel FAE recently - not sure how they communicate this.)
>
> However, especially Arria 10 GX for Cyclone pricing could be a real deal. (Great deal. So wonderful.)
>
> The main question (for many applications) is if Cyclone 10 GX has a lower power consumption than Arria 10 Gx - I doubt.
>
> Regards
>
> Thomas (sorry, couldn't resist...)


Article: 159742
Subject: Re: Intel (Altera) announces Cyclone-10
From: rickman <gnuarm@gmail.com>
Date: Fri, 17 Feb 2017 05:29:28 -0500
Links: << >>  << T >>  << A >>
On 2/17/2017 5:21 AM, already5chosen@yahoo.com wrote:
> On Thursday, February 16, 2017 at 10:16:54 PM UTC+2, thomas....@gmail.com wrote:
>>>>> It looks like Intel has learned to count from Microsoft.  The previous
>>>>> generation of Cyclone was Cyclone-5.
>>
>> I think the numbering is the least concern with this "new" family (no surprise, as there is already Max 10, Arria 10 and Stratix 10 - with similar jumps).
>>
>> However, it is pretty obvious that:
>> Cyclone 10 LP = Cyclone III / IV E
>
> So, if 10LP is renamed IV E, which in turn is renamed III, does it follow that 10LP is manufactured on TSMC 60 nm process?
>
>> Cyclone 10 GX = Arria 10 GX
>
> Including the two smallest ones?
> Hopefully, you are too pessimistic about it.
> If 10CX085 and 10CX105 are in reality just 10AX027 with majority of die fused off then its ratio of performance to static power consumption will be quite bad.
> It happened to smaller members of Arria-II family and it was not nice.

I wonder how long it will be before Altera transitions over to Intel 
fabs and/or if that will be an improvement or not.

It's interesting to me that the low end of the Cyclone 10 LP is just 6 
kLUTs.  That's my territory.  No pricing yet and the packaging is still 
pretty bad for low end work.  The choices are huge BGAs, a huge TQFP and 
smaller BGA that requires very fine artwork on the PCB which means no 
low cost PCB processes.  I guess I could just use every other pin or 
something.

-- 

Rick C

Article: 159743
Subject: Re: Intel (Altera) announces Cyclone-10
From: already5chosen@yahoo.com
Date: Fri, 17 Feb 2017 04:19:41 -0800 (PST)
Links: << >>  << T >>  << A >>
On Friday, February 17, 2017 at 12:29:39 PM UTC+2, rickman wrote:
> On 2/17/2017 5:21 AM, already5chosen@yahoo.com wrote:
> > On Thursday, February 16, 2017 at 10:16:54 PM UTC+2, thomas....@gmail.com wrote:
> >>>>> It looks like Intel has learned to count from Microsoft.  The previous
> >>>>> generation of Cyclone was Cyclone-5.
> >>
> >> I think the numbering is the least concern with this "new" family (no surprise, as there is already Max 10, Arria 10 and Stratix 10 - with similar jumps).
> >>
> >> However, it is pretty obvious that:
> >> Cyclone 10 LP = Cyclone III / IV E
> >
> > So, if 10LP is renamed IV E, which in turn is renamed III, does it follow that 10LP is manufactured on TSMC 60 nm process?
> >
> >> Cyclone 10 GX = Arria 10 GX
> >
> > Including the two smallest ones?
> > Hopefully, you are too pessimistic about it.
> > If 10CX085 and 10CX105 are in reality just 10AX027 with majority of die fused off then its ratio of performance to static power consumption will be quite bad.
> > It happened to smaller members of Arria-II family and it was not nice.
> 
> I wonder how long it will be before Altera transitions over to Intel 
> fabs and/or if that will be an improvement or not.

According to my understanding, official line is the same as before acquisition:
only high end (Stratix 10) will be manufactured at Intel's fabs. The rest remains on TSMC.
But I didn't follow the news too closely.

> 
> It's interesting to me that the low end of the Cyclone 10 LP is just 6 
> kLUTs.  That's my territory. 

I can not judge for sure, but it seems to me that "your territory" is MAX-10.

> No pricing yet and the packaging is still 
> pretty bad for low end work.  The choices are huge BGAs, a huge TQFP and 
> smaller BGA that requires very fine artwork on the PCB which means no 
> low cost PCB processes.  I guess I could just use every other pin or 
> something.
> 
> -- 
> 
> Rick C


Article: 159744
Subject: Re: cmos delay vs temperature
From: dalai lamah <antonio12358@hotmail.com>
Date: Fri, 17 Feb 2017 13:59:36 +0100
Links: << >>  << T >>  << A >>
One fine day John Larkin typed:

> Do any of the FPGA design tools report timing tempcos? I don't drive
> those tools myself.

Until some time ago, most FPGA timing analysis tools used the worst-case
parameters from the datasheets, which should have been characterized pretty
well. In fact, it was completely normal for a real design in a lab
environment to perform a lot better than the timing simulation would
suggest.

However, I'm not aware of the last (5-10 years of) evolution of the design
tools.

-- 
I flex my muscles and I am in the void.

Article: 159745
Subject: Re: cmos delay vs temperature
From: thomas.entner99@gmail.com
Date: Fri, 17 Feb 2017 07:06:17 -0800 (PST)
Links: << >>  << T >>  << A >>

> I suppose one could tweak Vcc vs temp to null out a native tempco.
>

I am not an expert in this field either, but to my knowledge, things got much more complex with the smaller process geometries.

Generally things will still become faster at higher supply voltage and lower temperature, but I have also had designs which (according to the timing analyzer) passed at 85°C but failed at 0°C.

Regards,

Thomas

www.entner-electronics.com - Home of EEBlaster and JPEG CODEC

Article: 159746
Subject: Re: Intel (Altera) announces Cyclone-10
From: thomas.entner99@gmail.com
Date: Fri, 17 Feb 2017 07:25:39 -0800 (PST)
Links: << >>  << T >>  << A >>
>
> So, if 10LP is renamed IV E, which in turn is renamed III, does it follow that 10LP is manufactured on TSMC 60 nm process?

This is indeed the case, it is even somewhere on their homepage, google for Cyclone 10 and 60nm... (Maybe they use a different flavour of 60nm process with better characteristics, but I do not really think so. Cyclone IV was at least shrunk from 65nm to 60nm, but this was also done for Cyclone III). Interestingly, on the TSMC homepage, there is no 60nm process, only 65nm and 55nm..)

> > Cyclone 10 GX = Arria 10 GX
>
> Including the two smallest ones?
> Hopefully, you are too pessimistic about it.
> If 10CX085 and 10CX105 are in reality just 10AX027 with majority of die fused off then its ratio of performance to static power consumption will be quite bad.

I fully agree on this. If they had a smaller die (with reduced power consumption) and Cyclone pricing, this would be a really good product... But I doubt it. Another interesting question is whether there will also be a Cyclone 10 SX with an SoC?

I think this is mainly a marketing thing to have something against Spartan 7, until the real new stuff is ready.

Regards,

Thomas

www.entner-electronics.com - Home of EEBlaster and JPEG Codec

Article: 159747
Subject: Re: Intel (Altera) announces Cyclone-10
From: rickman <gnuarm@gmail.com>
Date: Fri, 17 Feb 2017 15:26:34 -0500
Links: << >>  << T >>  << A >>
On 2/17/2017 7:19 AM, already5chosen@yahoo.com wrote:
> On Friday, February 17, 2017 at 12:29:39 PM UTC+2, rickman wrote:
>> On 2/17/2017 5:21 AM, already5chosen@yahoo.com wrote:
>>> On Thursday, February 16, 2017 at 10:16:54 PM UTC+2, thomas....@gmail.com wrote:
>>>>>>> It looks like Intel has learned to count from Microsoft.  The previous
>>>>>>> generation of Cyclone was Cyclone-5.
>>>>
>>>> I think the numbering is the least concern with this "new" family (no surprise, as there is already Max 10, Arria 10 and Stratix 10 - with similar jumps).
>>>>
>>>> However, it is pretty obvious that:
>>>> Cyclone 10 LP = Cyclone III / IV E
>>>
>>> So, if 10LP is renamed IV E, which in turn is renamed III, does it follow that 10LP is manufactured on TSMC 60 nm process?
>>>
>>>> Cyclone 10 GX = Arria 10 GX
>>>
>>> Including the two smallest ones?
>>> Hopefully, you are too pessimistic about it.
>>> If 10CX085 and 10CX105 are in reality just 10AX027 with majority of die fused off then its ratio of performance to static power consumption will be quite bad.
>>> It happened to smaller members of Arria-II family and it was not nice.
>>
>> I wonder how long it will be before Altera transitions over to Intel
>> fabs and/or if that will be an improvement or not.
>
> According to my understanding, official line is the same as before acquisition:
> only high end (Stratix 10) will be manufactured at Intel's fabs. The rest remains on TSMC.
> But I didn't follow the news too closely.
>
>>
>> It's interesting to me that the low end of the Cyclone 10 LP is just 6
>> kLUTs.  That's my territory.
>
> I can not judge for sure, but it seems to me that "your territory" is MAX-10.

I didn't realize MAX10 had ADC on chip as well as multipliers and 
memory.  That's interesting.  I can bring in lowish resolution signals 
and do signal processing on them.  16 bit ADC/DAC would be nicer.  They 
still give me packaging heartburn.  Even in these small parts they 
emphasize high I/O counts and fine pitch packages, *very* fine pitch.

-- 

Rick C

Article: 159748
Subject: Re: Intel (Altera) announces Cyclone-10
From: already5chosen@yahoo.com
Date: Sat, 18 Feb 2017 11:24:20 -0800 (PST)
Links: << >>  << T >>  << A >>
On Friday, February 17, 2017 at 5:25:47 PM UTC+2, thomas....@gmail.com wrote:
> > 
> > So, if 10LP is renamed IV E, which in turn is renamed III, does it follow that 10LP is manufactured on TSMC 60 nm processs ? 
> 
> This is indeed the case, it is even somewhere on their homepage, 

Yes, it's here now.
https://www.altera.com/products/fpga/cyclone-series/cyclone-10/cyclone-10-lp/overview.html

I don't think it was here 4-5 days ago, when I first heard about Cyclone-10.
But maybe I just didn't pay attention.

> google for Cyclone 10 and 60nm... (Maybe they use a different flavour of 60nm process with better characteristics, but I do not really think so. Cyclone IV was at least shrunk from 65nm to 60nm, but this was also done for Cyclone III).

If it really was a shrink.
60nm could well be just a name for a variant of the 65nm process that improved some characteristics, but not necessarily density.

> Interestingly, on the TSMC homepage, there is no 60nm process, only 65nm and 55nm..)
> 
> > > Cyclone 10 GX = Arria 10 GX
> > 
> > Including the two smallest ones? 
> > Hopefully, you are too pessimistic about it.
> > If 10CX085 and 10CX105 are in reality just 10AX027 with majority of die fused off then its ratio of performance to static power consumption will be quit bad.
> 
> I fully agree on this. If they had a smaller die (with reduced power consumption) and Cyclone pricing, this would be a really good product... But I doubt it. Another interesting question is whether there will also be a Cyclone 10 SX with an SoC?
> 
> I think this is mainly a marketing thing to have something against Spartan 7, until the real new stuff is ready.
> 
> Regards,
> 
> Thomas
> 
> www.entner-electronics.com - Home of EEBlaster and JPEG Codec


Article: 159749
Subject: designing a fpga
From: kristoff <kristoff@skypro.be>
Date: Fri, 24 Feb 2017 08:32:22 +0100
Links: << >>  << T >>  << A >>
Hi all,

A couple of weeks ago, I was watching Clifford Wolf's talk on his 
open-source FPGA flow at CCC.
(https://www.youtube.com/watch?v=SOn0g3k0FlE)


At the end, he mentions designing an open-source FPGA and the replies he 
got when he mentioned the idea to hardware companies. Apart from the 
question about the usefulness or economic viability of the idea itself 
(1), it did get me thinking.


Question: can I conclude from his remark that -if a hardware company 
were to start out designing an FPGA- the problem is more the 
"software" side of things than the actual hardware design of the chip?


Or is this conclusion a bit too easy?



Cheerio! Kr. Bonne.




