Messages from 159350

Article: 159350
Subject: Re: CORDIC in a land of built-in multipliers
From: Evgeny Filatov <filatov.ev@mipt.ru>
Date: Fri, 14 Oct 2016 18:27:27 +0300
On 14/10/2016 06:35, Rafael Deliano wrote:

(snip)

> But who knows after all these years what it is and
> how to implement it ?
> Same for Logarithmic Number Systems ( LNS ).

Why, logarithmic number systems are commonly used e.g. in the log-APP 
algorithm, as a part of turbo-decoders.

Evgeny


Article: 159351
Subject: Re: CORDIC in a land of built-in multipliers
From: spope33@speedymail.org (Steve Pope)
Date: Fri, 14 Oct 2016 16:27:01 +0000 (UTC)
Evgeny Filatov  <filatov.ev@mipt.ru> wrote:

>On 14/10/2016 06:35, Rafael Deliano wrote:

>> But who knows after all these years what it is and
>> how to implement it ?
>> Same for Logarithmic Number Systems ( LNS ).

>Why, logarithmic number systems are commonly used e.g. in the log-APP 
>algorithm, as a part of turbo-decoders.

Yes, well, some turbo decoders and nearly all Viterbi decoders
operate using log-likelihood ratios (metrics) to represent the probability
and measure values needed within the decoder.  But not all, and it
is instructive to implement these decoders using measures instead
of metrics.  Sometimes, in application, the best implementation
uses measures.

The idea that turbo (or LDPC) decoders are welded at the hip to
log domain representations is a narrow point of view .. IMO

S.

Article: 159352
Subject: Re: CORDIC in a land of built-in multipliers
From: Tim Wescott <seemywebsite@myfooter.really>
Date: Fri, 14 Oct 2016 12:37:28 -0500
On Thu, 13 Oct 2016 21:33:49 -0400, rickman wrote:

> On 10/13/2016 6:16 PM, Tim Wescott wrote:
>> On Thu, 13 Oct 2016 18:14:58 -0400, rickman wrote:
>>
>>> On 10/13/2016 5:10 PM, Tim Wescott wrote:
>>>> On Thu, 13 Oct 2016 20:59:49 +0000, Rob Gaddi wrote:
>>>>
>>>>> Tim Wescott wrote:
>>>>>
>>>>>> On Thu, 13 Oct 2016 13:46:18 -0500, Tim Wescott wrote:
>>>>>>
>>>>>>> Now that FPGAs have built-in DSP blocks, how commonly is CORDIC
>>>>>>> used? Is it still a necessary go-to for anyone contemplating doing
>>>>>>> DSP in an FPGA,
>>>>>>> or is it starting to ease onto the off-ramp of history?
>>>>>>>
>>>>>>> And, putting FPGA use aside -- how common is it in ASICs?
>>>>>>
>>>>>> Being bad because I'm cross-posting.  Being bad because I'm
>>>>>> cross-posting a reply to _my own post_.
>>>>>>
>>>>>> Oh well -- I'm thinking that maybe some of the folks on comp.dsp
>>>>>> who aren't also on comp.arch.fpga will have some thoughts.
>>>>>>
>>>>>>
>>>>> I've considered it many times, but never used it.  Then again it's
>>>>> not like I use the DSP blocks for CORDIC sorts of things anyhow; I
>>>>> just throw a RAM lookup table at the problem.
>>>>
>>>> That was the other thing that I should have mentioned.
>>>>
>>>> I've heard a lot of talk about CORDIC, but it seems to be one of
>>>> those things that was Really Critically Important back when an Intel
>>>> 4004 cost (reputedly) $5 in then-dollars, but maybe isn't so
>>>> important now,
>>>> when it seems like the package costs more than the silicon inside it.
>>>
>>> There are still FPGAs at the lower end that don't include multipliers.
>>> I have an older design in a small FPGA I am still shipping that does
>>> the iterative adding thing.  It has been a while since I considered
>>> using the CORDIC algorithm.  What exactly is the advantage of the
>>> CORDIC algorithm again?
>>
>> It uses less gates.  Which is why I'm asking my question -- gates seem
>> to be a lot cheaper these days, so do people still use CORDIC?
> 
> Can you explain that briefly?  Why is it less gates than an adder?
> Adders are pretty durn simple.  I thought the CORDIC algorithm used an
> ADD.

Please forgive me for cutting this short:

Tim: "do you use CORDIC?"

Rick:  "No"

Tim: "thank you"

If you want more information about how it's used, please don't ask me -- 
I'm asking YOU!!

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
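The "fewer gates" point being debated comes down to this: each CORDIC iteration rotates the working vector by atan(2^-i), so the only "multiplies" are by powers of two, which in hardware are just wire shifts feeding adders.  A rough floating-point Python sketch of rotation-mode CORDIC (an illustrative model, not code from this thread; a real design would use fixed-point registers):

```python
import math

def cordic_sincos(angle, iterations=16):
    """Compute (sin, cos) of angle (radians, |angle| <= pi/2) with
    rotation-mode CORDIC.  Each step needs only shifts and adds."""
    # Rotation angles atan(2**-i); in hardware these live in a tiny ROM.
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Aggregate gain of the micro-rotations; folded into the start value.
    K = 1.0
    for i in range(iterations):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, angle          # pre-scaled unit vector, residual angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0  # steer the residual angle toward zero
        x, y, z = (x - d * y * 2.0 ** -i,   # the *2**-i are wire shifts
                   y + d * x * 2.0 ** -i,
                   z - d * atans[i])
    return y, x                      # (sin, cos)
```

Each pass through the loop maps to one pipeline stage of two add/subtract units plus a small angle-ROM entry, which is where the gate-count argument comes from.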

Article: 159353
Subject: Re: CORDIC in a land of built-in multipliers
From: Rob Doyle <radioengr@gmail.com>
Date: Fri, 14 Oct 2016 11:18:38 -0700
On 10/14/2016 7:40 AM, Steve Pope wrote:
> Cecil Bayona  <cbayona@cbayona.com> wrote:
>
>> Cordic is used for items like sine, cosine, tangent, square root, etc.,
>> which involve multiplication and some division, but the Cordic
>> algorithms eliminate multiply and divide, so they simplify the logic by
>> quite a bit.
>
> I've been designing DSP ASIC's since the 1980's and have never
> chosen to use Cordic.  I can envision where Cordic might make sense --
> where the precision requirements are not really known or are
> very high, Cordic will win over LUT-based designs but most typically
> the LUT-based designs are a simpler way to meet the requirement.
>
> It's also possible that in some PLD-based designs Cordic might
> be the way to go.
>
> Steve
>

I've used a CORDIC to compute the magnitude of a complex signal.

I found it easier than calculating the square root.

Rob.
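The magnitude computation Rob describes uses CORDIC in vectoring mode: the vector is rotated onto the x-axis with shift-and-add steps, and x converges to the scaled magnitude.  A floating-point Python sketch (illustrative only; the gain correction would normally be a single constant multiply or be absorbed downstream):

```python
import math

def cordic_mag(x, y, iterations=16):
    """Magnitude of (x, y) via vectoring-mode CORDIC: drive y to zero
    with shift-and-add rotations; x ends up as K * sqrt(x*x + y*y)."""
    x, y = abs(x), abs(y)                 # fold into the first quadrant
    for i in range(iterations):
        d = -1.0 if y >= 0 else 1.0       # rotate toward y = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
    # Constant CORDIC gain of the micro-rotations (~1.6468 for many steps).
    K = 1.0
    for i in range(iterations):
        K *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    return x / K
```

No square root or multiplier is needed in the loop itself, which is why this was the easier route in a small FPGA.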

Article: 159354
Subject: Re: CORDIC in a land of built-in multipliers
From: rickman <gnuarm@gmail.com>
Date: Fri, 14 Oct 2016 15:07:10 -0400
On 10/14/2016 1:37 PM, Tim Wescott wrote:
> On Thu, 13 Oct 2016 21:33:49 -0400, rickman wrote:
>
>> On 10/13/2016 6:16 PM, Tim Wescott wrote:
>>> On Thu, 13 Oct 2016 18:14:58 -0400, rickman wrote:
>>>
>>>> On 10/13/2016 5:10 PM, Tim Wescott wrote:
>>>>> On Thu, 13 Oct 2016 20:59:49 +0000, Rob Gaddi wrote:
>>>>>
>>>>>> Tim Wescott wrote:
>>>>>>
>>>>>>> On Thu, 13 Oct 2016 13:46:18 -0500, Tim Wescott wrote:
>>>>>>>
>>>>>>>> Now that FPGAs have built-in DSP blocks, how commonly is CORDIC
>>>>>>>> used? Is it still a necessary go-to for anyone contemplating doing
>>>>>>>> DSP in an FPGA,
>>>>>>>> or is it starting to ease onto the off-ramp of history?
>>>>>>>>
>>>>>>>> And, putting FPGA use aside -- how common is it in ASICs?
>>>>>>>
>>>>>>> Being bad because I'm cross-posting.  Being bad because I'm
>>>>>>> cross-posting a reply to _my own post_.
>>>>>>>
>>>>>>> Oh well -- I'm thinking that maybe some of the folks on comp.dsp
>>>>>>> who aren't also on comp.arch.fpga will have some thoughts.
>>>>>>>
>>>>>>>
>>>>>> I've considered it many times, but never used it.  Then again it's
>>>>>> not like I use the DSP blocks for CORDIC sorts of things anyhow; I
>>>>>> just throw a RAM lookup table at the problem.
>>>>>
>>>>> That was the other thing that I should have mentioned.
>>>>>
>>>>> I've heard a lot of talk about CORDIC, but it seems to be one of
>>>>> those things that was Really Critically Important back when an Intel
>>>>> 4004 cost (reputedly) $5 in then-dollars, but maybe isn't so
>>>>> important now,
>>>>> when it seems like the package costs more than the silicon inside it.
>>>>
>>>> There are still FPGAs at the lower end that don't include multipliers.
>>>> I have an older design in a small FPGA I am still shipping that does
>>>> the iterative adding thing.  It has been a while since I considered
>>>> using the CORDIC algorithm.  What exactly is the advantage of the
>>>> CORDIC algorithm again?
>>>
>>> It uses less gates.  Which is why I'm asking my question -- gates seem
>>> to be a lot cheaper these days, so do people still use CORDIC?
>>
>> Can you explain that briefly?  Why is it less gates than an adder?
>> Adders are pretty durn simple.  I thought the CORDIC algorithm used an
>> ADD.
>
> Please forgive me for cutting this short:
>
> Tim: "do you use CORDIC?"
>
> Rick:  "No"
>
> Tim: "thank you"
>
> If you want more information about how it's used, please don't ask me --
> I'm asking YOU!!

Tim, that is a very strange reply.  It has been a long time since I've 
looked at the CORDIC and I couldn't remember the details of the math 
involved.  I didn't say *anything* about how it is used.

Your questions were:

"how commonly is CORDIC used?"

"Is it still a necessary go-to for anyone contemplating doing DSP in an 
FPGA, or is it starting to ease onto the off-ramp of history?"

Nowhere did you ask how it is used...


Rob Gaddi wrote, "I've considered it many times, but never used it." and 
you didn't feel the need to criticize him.

You made a statement that I don't think can be supported.  When I asked 
about that you give me a hard time...

Did you get up on the wrong side of bed or something?

-- 

Rick C

Article: 159355
Subject: Re: CORDIC in a land of built-in multipliers
From: gtwrek@sonic.net (Mark Curry)
Date: Fri, 14 Oct 2016 21:36:06 -0000 (UTC)
In article <z5udnd7dVdjnR2LKnZ2dnUU7-c2dnZ2d@giganews.com>,
Tim Wescott  <seemywebsite@myfooter.really> wrote:
>On Thu, 13 Oct 2016 13:46:18 -0500, Tim Wescott wrote:
>
>> Now that FPGAs have built-in DSP blocks, how commonly is CORDIC used? 
>> Is it still a necessary go-to for anyone contemplating doing DSP in an
>> FPGA,
>> or is it starting to ease onto the off-ramp of history?
>> 
>> And, putting FPGA use aside -- how common is it in ASICs?
>
>Being bad because I'm cross-posting.  Being bad because I'm cross-posting 
>a reply to _my own post_.
>
>Oh well -- I'm thinking that maybe some of the folks on comp.dsp who 
>aren't also on comp.arch.fpga will have some thoughts.

Since I learned the algorithm in college, I've always wanted to implement it on
an ASIC or FPGA... And never had a place where it was the right fit.  
As some of the other posters have noted, gates and multipliers are becoming
almost "free" in many applications.  Block RAMs are also sometimes available to
just store trig. tables.  

I think I remember coding a primitive CORDIC algorithm in VHDL some time in
the 90s.  From what I recall, it wasn't used anywhere, I just wanted to see 
how to do it.

--Mark
 


Article: 159356
Subject: Re: CORDIC in a land of built-in multipliers
From: Tim Wescott <tim@seemywebsite.really>
Date: Fri, 14 Oct 2016 18:19:16 -0500
On Fri, 14 Oct 2016 14:40:02 +0000, Steve Pope wrote:

> Cecil Bayona  <cbayona@cbayona.com> wrote:
> 
>>Cordic is used for items like sine, cosine, tangent, square root, etc.,
>>which involve multiplication and some division, but the Cordic
>>algorithms eliminate multiply and divide, so they simplify the logic by
>>quite a bit.
> 
> I've been designing DSP ASIC's since the 1980's and have never chosen to
> use Cordic.  I can envision where Cordic might make sense --
> where the precision requirements are not really known or are very high,
> Cordic will win over LUT-based designs but most typically the LUT-based
> designs are a simpler way to meet the requirement.
> 
> It's also possible that in some PLD-based designs Cordic might be the
> way to go.
> 
> Steve

Are the ASICs you're using based on LUTs, or are they a sea-of-gates 
where you define the interconnect?

-- 
www.wescottdesign.com

Article: 159357
Subject: Re: CORDIC in a land of built-in multipliers
From: spope33@speedymail.org (Steve Pope)
Date: Sat, 15 Oct 2016 00:00:47 +0000 (UTC)
Tim Wescott  <tim@seemywebsite.really> wrote:

>On Fri, 14 Oct 2016 14:40:02 +0000, Steve Pope wrote:

>> I've been designing DSP ASIC's since the 1980's and have never chosen to
>> use Cordic.  I can envision where Cordic might make sense --
>> where the precision requirements are not really known or are very high,
>> Cordic will win over LUT-based designs but most typically the LUT-based
>> designs are a simpler way to meet the requirement.

>> It's also possible that in some PLD-based designs Cordic might be the
>> way to go.

>Are the ASICs you're using based on LUTs, or are they a sea-of-gates 
>where you define the interconnect?

Interesting question.

In the early era (80's through early 90's), if a LUT was needed in an ASIC 
it was a layout-based design consisting of a mask-programmed ROM.  

Certain other logic functions (notably, ALUs and parallel multipliers)
would also be layout-based.  More random logic would be built by
routing together standard cells.  

(At the time, a "gate array" or "sea of gates" approach suggested the 
cells were very small, on the order of a gate or two as opposed to cells 
for D-flops, adder sections, etc.; and typically that only a few 
metal layers were customized.)

In the modern era, lookup tables are usually synthesized from gate
logic along with the rest of the random logic, and physically the
only large blocks in the digital part of the design are RAMs.
Of course the converters and other analog blocks are separate blocks.

The exception is large CPU chips, where the ALUs, memory switches,
and so forth are still layout-based designs.  

If you have a photomicrograph of a chip you can usually see what
they are doing and how they've partitioned it.

Steve

Article: 159358
Subject: Re: CORDIC in a land of built-in multipliers
From: Allan Herriman <allanherriman@hotmail.com>
Date: 15 Oct 2016 07:35:30 GMT
On Thu, 13 Oct 2016 13:46:18 -0500, Tim Wescott wrote:

> Now that FPGAs have built-in DSP blocks, how commonly is CORDIC used?
> Is it still a necessary go-to for anyone contemplating doing DSP in an
> FPGA, or is it starting to ease onto the off-ramp of history?
> 
> And, putting FPGA use aside -- how common is it in ASICs?

Ray Andraka used to be a frequent comp.arch.fpga poster.  On his website 
he lists several FPGA projects that use CORDIC, however I notice that 
these all seem to have older dates, e.g. from the Xilinx Virtex-E era.

http://www.andraka.com/cordic.php

Allan

Article: 159359
Subject: Re: CORDIC in a land of built-in multipliers
From: rickman <gnuarm@gmail.com>
Date: Sat, 15 Oct 2016 04:19:35 -0400
On 10/15/2016 3:35 AM, Allan Herriman wrote:
> On Thu, 13 Oct 2016 13:46:18 -0500, Tim Wescott wrote:
>
>> Now that FPGAs have built-in DSP blocks, how commonly is CORDIC used?
>> Is it still a necessary go-to for anyone contemplating doing DSP in an
>> FPGA, or is it starting to ease onto the off-ramp of history?
>>
>> And, putting FPGA use aside -- how common is it in ASICs?
>
> Ray Andraka used to be a frequent comp.arch.fpga poster.  On his website
> he lists several FPGA projects that use CORDIC, however I notice that
> these all seem to have older dates, e.g. from the Xilinx Virtex-E era.
>
> http://www.andraka.com/cordic.php

He was very old school, having developed libraries of hierarchical 
schematics for all manner of complex functions, with full specification 
of the placement of each logic function.  So he was not happy with the 
idea of changing over to HDL, which Xilinx told him would eventually be 
the only supported entry method.  Once he gave it a serious try and 
discovered he could do exactly the same thing with structural code, he 
was happily convinced HDL was the way to go.  I expect he has a full HDL 
library of CORDIC functions.

-- 

Rick C

Article: 159360
Subject: Re: CORDIC in a land of built-in multipliers
From: Evgeny Filatov <filatov.ev@mipt.ru>
Date: Sat, 15 Oct 2016 11:56:08 +0300
On 14.10.2016 19:27, Steve Pope wrote:
> Evgeny Filatov  <filatov.ev@mipt.ru> wrote:
>
>> On 14/10/2016 06:35, Rafael Deliano wrote:
>
>>> But who knows after all these years what it is and
>>> how to implement it ?
>>> Same for Logarithmic Number Systems ( LNS ).
>
>> Why, logarithmic number systems are commonly used e.g. in the log-APP
>> algorithm, as a part of turbo-decoders.
>
> Yes, well, some turbo decoders and nearly all Viterbi decoders
> operate using log-likelihood ratios (metrics) to represent the probability
> and measure values needed within the decoder.  But not all, and it
> is instructive to implement these decoders using measures instead
> of metrics.  Sometimes, in application the best implementation
> uses measures.
>
> The idea that turbo (or LDPC) decoders are welded at the hip to
> log domain representations is a narrow point of view .. IMO
>
> S.
>

Alright. By the way, Steve, do you follow polar codes? Looks like they 
have comparable performance to LDPC now:

http://sci-hub.cc/10.1109/JSAC.2015.2504299

Gene


Article: 159361
Subject: Re: CORDIC in a land of built-in multipliers
From: jim.brakefield@ieee.org
Date: Sat, 15 Oct 2016 07:49:19 -0700 (PDT)
On Thursday, October 13, 2016 at 10:51:13 PM UTC-5, Cecil Bayona wrote:
> On 10/13/2016 8:33 PM, rickman wrote:
> > On 10/13/2016 6:16 PM, Tim Wescott wrote:
> >> On Thu, 13 Oct 2016 18:14:58 -0400, rickman wrote:
> >>
> >>> On 10/13/2016 5:10 PM, Tim Wescott wrote:
> >>>> On Thu, 13 Oct 2016 20:59:49 +0000, Rob Gaddi wrote:
> >>>>
> >>>>> Tim Wescott wrote:
> >>>>>
> >>>>>> On Thu, 13 Oct 2016 13:46:18 -0500, Tim Wescott wrote:
> >>>>>>
> >>>>>>> Now that FPGAs have built-in DSP blocks, how commonly is CORDIC
> >>>>>>> used? Is it still a necessary go-to for anyone contemplating doing
> >>>>>>> DSP in an FPGA,
> >>>>>>> or is it starting to ease onto the off-ramp of history?
> >>>>>>>
> >>>>>>> And, putting FPGA use aside -- how common is it in ASICs?
> >>>>>>
> >>>>>> Being bad because I'm cross-posting.  Being bad because I'm
> >>>>>> cross-posting a reply to _my own post_.
> >>>>>>
> >>>>>> Oh well -- I'm thinking that maybe some of the folks on comp.dsp who
> >>>>>> aren't also on comp.arch.fpga will have some thoughts.
> >>>>>>
> >>>>>>
> >>>>> I've considered it many times, but never used it.  Then again it's not
> >>>>> like I use the DSP blocks for CORDIC sorts of things anyhow; I just
> >>>>> throw a RAM lookup table at the problem.
> >>>>
> >>>> That was the other thing that I should have mentioned.
> >>>>
> >>>> I've heard a lot of talk about CORDIC, but it seems to be one of those
> >>>> things that was Really Critically Important back when an Intel 4004
> >>>> cost (reputedly) $5 in then-dollars, but maybe isn't so important now,
> >>>> when it seems like the package costs more than the silicon inside it.
> >>>
> >>> There are still FPGAs at the lower end that don't include multipliers. I
> >>> have an older design in a small FPGA I am still shipping that does the
> >>> iterative adding thing.  It has been a while since I considered using
> >>> the CORDIC algorithm.  What exactly is the advantage of the CORDIC
> >>> algorithm again?
> >>
> >> It uses less gates.  Which is why I'm asking my question -- gates seem to
> >> be a lot cheaper these days, so do people still use CORDIC?
> >
> > Can you explain that briefly?  Why is it less gates than an adder?
> > Adders are pretty durn simple.  I thought the CORDIC algorithm used an ADD.
> >
> Cordic is used for items like sine, cosine, tangent, square root, etc., 
> which involve multiplication and some division, but the Cordic 
> algorithms eliminate multiply and divide, so they simplify the logic by 
> quite a bit.  It's mainly used with devices that do not have multiply 
> and divide capability; even then Cordic could be faster.
> 
> -- 
> Cecil - k5nwa

> Cordic is used for items like sine, cosine, tangent, square root, etc.

Used it for pixel-rate polar to rectangular (needed both angle & radius).
Vendor's CORDIC IP worked fine.  It used a ~12 stage pipeline.

Article: 159362
Subject: Re: CORDIC in a land of built-in multipliers
From: Kevin Neilson <kevin.neilson@xilinx.com>
Date: Sun, 16 Oct 2016 10:54:33 -0700 (PDT)
> 
> I've considered it many times, but never used it.  Then again it's not
> like I use the DSP blocks for CORDIC sorts of things anyhow; I just
> throw a RAM lookup table at the problem.
> 
> -- 
> Rob Gaddi, Highland Technology -- www.highlandtechnology.com

Same here.  CORDIC is something that is cool, but does not seem useful 
any longer.  It takes too many cycles and is only useful if you don't 
care about throughput, but in those applications, you are probably 
using a processor, not an FPGA.  You keep the latency but increase 
throughput by pipelining, but then it's bigger than an alternative 
solution.  If you can, you use a blockRAM lookup table for trig 
functions.  Otherwise you'd probably use embedded multipliers and a 
Taylor series (with Horner's Rule).  Or a hybrid, such as a Farrow 
function, which can be as simple as interpolating between lookup 
elements.  The Farrow idea works really well in sine lookups since the 
derivative of sine is a shifted sine, so you can use the same lookup 
table.  That's what I've used.

For arctan(y/x), which is needed in phase recovery, I've used 
2-dimensional lookup tables, where the input is a concatenated {x,y}.  
You don't need a lot of precision.  There are also good approximations 
for things like sqrt(x^2+y^2).

A lot of ideas persist for a long time in textbooks and practice when 
they are no longer useful.  I do error correction now and all the 
textbooks still show, for example, how to build a Reed-Solomon encoder 
that does 1 symbol per cycle.  If I wanted to do something that slowly, 
I'd probably do it in software.  Sure a lot easier.  FPGAs are only 
useful if you are doing things really fast.
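The sine-table trick described here (the derivative of sine is a quarter-turn-shifted sine, so one table serves both the value and the first-order correction) can be sketched in Python; the table depth and indexing scheme below are illustrative choices, not from the post:

```python
import math

N = 256  # table depth; think of it as one blockRAM-sized sine table
TABLE = [math.sin(2 * math.pi * k / N) for k in range(N)]

def interp_sin(phase):
    """Sine of a phase in [0, 1) turns: table lookup plus a first-order
    correction.  The derivative (cos) is the same table read a quarter
    turn ahead, so no second table is needed."""
    pos = phase * N
    k = int(pos) % N
    frac = pos - int(pos)                 # fractional table index
    deriv = TABLE[(k + N // 4) % N]       # cos, from the sine table
    return TABLE[k] + frac * (2 * math.pi / N) * deriv
```

The correction multiply is the only arithmetic beyond the lookup, which is what makes this attractive when an embedded multiplier is sitting idle next to the blockRAM.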


Article: 159363
Subject: Re: CORDIC in a land of built-in multipliers
From: Kevin Neilson <kevin.neilson@xilinx.com>
Date: Sun, 16 Oct 2016 11:05:49 -0700 (PDT)
> 
> I've used a CORDIC to compute the magnitude of a complex signal.
> 
> I found it easier than calculating the square root.
> 
> Rob.

True, but there are probably simpler ways.  You can use a 2D lookup in a blockRAM.  And an approximation that can be reasonably precise is

  mag = A*max(I,Q) + B*min(I,Q)

where A,B are constants.  (You can find suggestions for these on the web.)

Article: 159364
Subject: Re: FPGA LABVIEW programming
From: Kevin Neilson <kevin.neilson@xilinx.com>
Date: Sun, 16 Oct 2016 11:10:04 -0700 (PDT)
> I can look at a schematic and tell a circuit is a shift register, I can
> look at VHDL/Verilog code and see a shift register, but that mess of
> LabView crud they hade on the screen looked NOTHING like a shift
> register.  I know this is pretty much version 1 of their FPGA software,
> but it will take much more work before it is ever usefull as a design
> tool.
> 
> I can't see how this will ever catch on for LabView...

That's been my impression of any "high-level" tool I've used, especially any graphical ones.  They are possible in theory, but in practice they are generally terrible.

Article: 159365
Subject: Microsoft's FPGA Translates Wikipedia in less than a Tenth of a
From: rickman <gnuarm@gmail.com>
Date: Sun, 16 Oct 2016 20:22:29 -0400
I found this pretty impressive.  I wonder if this is why Intel bought 
Altera or if they are not working together on this?  Ulpp!  Seek and ye 
shall find....

"Microsoft is using so many FPGA the company has a direct influence over 
the global FPGA supply and demand. Intel executive vice president, Diane 
Bryant, has already stated that Microsoft is the main reason behind 
Intel's decision to acquire FPGA-maker, Altera."

#Microsoft's #FPGA Translates #Wikipedia in less than a Tenth of a 
Second http://hubs.ly/H04JLSp0

I guess this will only steer the FPGA market more in the direction of 
larger and faster rather than giving us much at the low end of energy 
efficient and small FPGAs.  That's where I like to live.

-- 

Rick C

Article: 159366
Subject: Re: Microsoft's FPGA Translates Wikipedia in less than a Tenth of a Second
From: krw <krw@nowhere.com>
Date: Sun, 16 Oct 2016 20:53:11 -0400
On Sun, 16 Oct 2016 20:22:29 -0400, rickman <gnuarm@gmail.com> wrote:

>I found this pretty impressive.  I wonder if this is why Intel bought 
>Altera or if they are not working together on this?  Ulpp!  Seek and ye 
>shall find....
>
>"Microsoft is using so many FPGA the company has a direct influence over 
>the global FPGA supply and demand. Intel executive vice president, Diane 
>Bryant, has already stated that Microsoft is the main reason behind 
>Intel's decision to acquire FPGA-maker, Altera."
>
>#Microsoft's #FPGA Translates #Wikipedia in less than a Tenth of a 
>Second http://hubs.ly/H04JLSp0
>
>I guess this will only steer the FPGA market more in the direction of 
>larger and faster rather than giving us much at the low end of energy 
>efficient and small FPGAs.  That's where I like to live.

No, it may mean that Altera won't play there but someone surely will.

Article: 159367
Subject: Re: Microsoft's FPGA Translates Wikipedia in less than a Tenth of a
From: Tim Wescott <tim@seemywebsite.really>
Date: Sun, 16 Oct 2016 19:55:13 -0500
On Sun, 16 Oct 2016 20:22:29 -0400, rickman wrote:

> I found this pretty impressive.  I wonder if this is why Intel bought
> Altera or if they are not working together on this?  Ulpp!  Seek and ye
> shall find....
> 
> "Microsoft is using so many FPGA the company has a direct influence over
> the global FPGA supply and demand. Intel executive vice president, Diane
> Bryant, has already stated that Microsoft is the main reason behind
> Intel's decision to acquire FPGA-maker, Altera."
> 
> #Microsoft's #FPGA Translates #Wikipedia in less than a Tenth of a
> Second http://hubs.ly/H04JLSp0
> 
> I guess this will only steer the FPGA market more in the direction of
> larger and faster rather than giving us much at the low end of energy
> efficient and small FPGAs.  That's where I like to live.

Hopefully it'll create a vacuum into which other companies will grow.  
Very possibly not without some pain in the interim.  Markets change, we 
have to adapt.

-- 
www.wescottdesign.com

Article: 159368
Subject: Re: Microsoft's FPGA Translates Wikipedia in less than a Tenth of a Second
From: quiasmox@yahoo.com
Date: Sun, 16 Oct 2016 22:00:13 -0500
On Sun, 16 Oct 2016 20:22:29 -0400, rickman <gnuarm@gmail.com> wrote:

>I found this pretty impressive. 

Translates it where? Across the room?
To what? Rot13?

-- 
John

Article: 159369
Subject: Re: Microsoft's FPGA Translates Wikipedia in less than a Tenth of a Second
From: boB <boB@theresnoplacelikehome.com>
Date: Mon, 17 Oct 2016 00:36:40 -0700
On Sun, 16 Oct 2016 22:00:13 -0500, quiasmox@yahoo.com wrote:

>On Sun, 16 Oct 2016 20:22:29 -0400, rickman <gnuarm@gmail.com> wrote:
>
>>I found this pretty impressive. 
>
>Translates it where? Across the room?
>To what? Rot13?

That's what I wanted to know.  The article doesn't say.

Seems pretty useless like a lot of media blurbs about things the
editors know nothing about.

boB



Article: 159370
Subject: Re: Microsoft's FPGA Translates Wikipedia in less than a Tenth of a
From: rickman <gnuarm@gmail.com>
Date: Mon, 17 Oct 2016 03:56:46 -0400
On 10/16/2016 8:55 PM, Tim Wescott wrote:
> On Sun, 16 Oct 2016 20:22:29 -0400, rickman wrote:
>
>> I found this pretty impressive.  I wonder if this is why Intel bought
>> Altera or if they are not working together on this?  Ulpp!  Seek and ye
>> shall find....
>>
>> "Microsoft is using so many FPGA the company has a direct influence over
>> the global FPGA supply and demand. Intel executive vice president, Diane
>> Bryant, has already stated that Microsoft is the main reason behind
>> Intel's decision to acquire FPGA-maker, Altera."
>>
>> #Microsoft's #FPGA Translates #Wikipedia in less than a Tenth of a
>> Second http://hubs.ly/H04JLSp0
>>
>> I guess this will only steer the FPGA market more in the direction of
>> larger and faster rather than giving us much at the low end of energy
>> efficient and small FPGAs.  That's where I like to live.
>
> Hopefully it'll create a vacuum into which other companies will grow.
> Very possibly not without some pain in the interim.  Markets change, we
> have to adapt.

I've never been clear on the fundamental forces in the FPGA business. 
The major FPGA companies have operated very similarly catering to the 
telecom markets while giving pretty much lip service to the rest of the 
electronics world.

I suppose there is a difference in technology requirements between MCUs 
and FPGAs.  MCUs often are not even near the bleeding edge of process 
technology while FPGAs seem to drive it to some extent.  Other than 
Intel who seems to always be the first to bring chips out at a given 
process node, the FPGA companies are a close second.  But again, I think 
that is driven by their serving the telecom market where density is king.

So I don't see any fundamental reasons why FPGAs can't be built on older 
processes to keep price down.  If MCUs can be made in a million 
combinations of RAM, Flash and peripherals, why can't FPGAs?  Even 
analog is used in MCUs; why can't FPGAs be made with the same processes, 
giving us programmable logic combined with a variety of ADCs, DACs and 
comparators on the same die?  Put them in smaller packages (lower pin 
counts, not the micro-pitch BGAs) and let them be used like MCUs.

Maybe the market just isn't there.  Many seem to feel FPGAs are much 
harder to work with than MCUs.  To me they are much simpler.

-- 

Rick C

Article: 159371
Subject: Re: Microsoft's FPGA Translates Wikipedia in less than a Tenth of a
From: rickman <gnuarm@gmail.com>
Date: Mon, 17 Oct 2016 04:00:55 -0400
On 10/16/2016 11:00 PM, quiasmox@yahoo.com wrote:
> On Sun, 16 Oct 2016 20:22:29 -0400, rickman <gnuarm@gmail.com> wrote:
>
>> I found this pretty impressive.
>
> Translates it where? Across the room?
> To what? Rot13?

Did you read the article?  They are designing Internet servers that will 
operate much faster and at lower power levels.  I believe a translation 
app is being used as a benchmark.  It's not like websites are never 
translated.

-- 

Rick C

Article: 159372
Subject: Re: Microsoft's FPGA Translates Wikipedia in less than a Tenth of
From: David Brown <david.brown@hesbynett.no>
Date: Mon, 17 Oct 2016 12:25:53 +0200
On 17/10/16 09:56, rickman wrote:
> On 10/16/2016 8:55 PM, Tim Wescott wrote:
>> On Sun, 16 Oct 2016 20:22:29 -0400, rickman wrote:
>>
>>> I found this pretty impressive.  I wonder if this is why Intel bought
>>> Altera or if they are not working together on this?  Ulpp!  Seek and ye
>>> shall find....
>>>
>>> "Microsoft is using so many FPGA the company has a direct influence over
>>> the global FPGA supply and demand. Intel executive vice president, Diane
>>> Bryant, has already stated that Microsoft is the main reason behind
>>> Intel's decision to acquire FPGA-maker, Altera."
>>>
>>> #Microsoft's #FPGA Translates #Wikipedia in less than a Tenth of a
>>> Second http://hubs.ly/H04JLSp0
>>>
>>> I guess this will only steer the FPGA market more in the direction of
>>> larger and faster rather than giving us much at the low end of energy
>>> efficient and small FPGAs.  That's where I like to live.
>>
>> Hopefully it'll create a vacuum into which other companies will grow.
>> Very possibly not without some pain in the interim.  Markets change, we
>> have to adapt.
> 
> I've never been clear on the fundamental forces in the FPGA business.
> The major FPGA companies have operated very similarly catering to the
> telecom markets while giving pretty much lip service to the rest of the
> electronics world.
> 
> I suppose there is a difference in technology requirements between MCUs
> and FPGAs.  MCUs often are not even near the bleeding edge of process
> technology while FPGAs seem to drive it to some extent.  Other than
> Intel who seems to always be the first to bring chips out at a given
> process node, the FPGA companies are a close second.  But again, I think
> that is driven by their serving the telecom market where density is king.
> 
> So I don't see any fundamental reasons why FPGAs can't be built on older
> processes to keep price down.  If MCUs can be made in a million
> combinations of RAM, Flash and peripherals, why can't FPGAs?  Even
> analog is used in MCUs, why can't FPGAs be made with the same processes
> giving us programmable logic combined with a variety of ADC, DAC and
> comparators on the same die.  Put them in smaller packages (lower pin
> counts, not the micro pitch BGAs) and let them to be used like MCUs.

As far as I understand it, there is quite a variation in the types of
processes used - it's not just about the feature size.  The number of
layers, the types of layers, the types of doping, the fault tolerance,
etc., all play a part in what fits well on the same die.  So you might
easily find that if you put an ADC on a die setup that was good for FPGA
fabric, then the ADC would be a lot worse (speed, accuracy, power
consumption, noise, cost) than usual.  Alternatively, your die setup
could be good for the ADC - and then it would give a poor quality FPGA part.

Microcontrollers are made with a compromise.  The cpu part is not as
fast or efficient as a pure cpu could be, nor is the flash part, nor the
analogue parts.  But they are all good enough that the combination is a
saving (in dollars and watts, as well as mm²) overall.

But I think there are some FPGAs with basic analogue parts, and
certainly with flash.  There are also microcontrollers with some
programmable logic (more CPLD-type logic than FPGA).  Maybe we will see
more "compromise" parts in the future, but I doubt if we will see good
analogue bits and good FPGA bits on the same die.

What will, I think, make more of a difference is multi-die packaging -
either as side-by-side dies or horizontally layered dies.  But I expect
that to be more on the high-end first (like FPGA die combined with big
ram blocks).


> 
> Maybe the market just isn't there.  Many seem to feel FPGAs are much
> harder to work with than MCUs.  To me they are much simpler.
> 

I think that is habit and familiarity - there is a big difference in
mindset between FPGA programming and MCU programming.  I don't think you
can say that one type of development is fundamentally harder or easier
than the other, but the simple fact is that a great deal more people are
familiar with programming serial execution devices than with developing
for programmable logic.


Article: 159373
Subject: Re: Microsoft's FPGA Translates Wikipedia in less than a Tenth of a Second
From: John Larkin <jjlarkin@highlandtechnology.com>
Date: Mon, 17 Oct 2016 09:15:17 -0700
Links: << >>  << T >>  << A >>
On Sun, 16 Oct 2016 19:55:13 -0500, Tim Wescott
<tim@seemywebsite.really> wrote:

>On Sun, 16 Oct 2016 20:22:29 -0400, rickman wrote:
>
>> I found this pretty impressive.  I wonder if this is why Intel bought
>> Altera or if they are not working together on this?  Ulpp!  Seak and yea
>> shall find....
>> 
>> "Microsoft is using so many FPGA the company has a direct influence over
>> the global FPGA supply and demand. Intel executive vice president, Diane
>> Bryant, has already stated that Microsoft is the main reason behind
>> Intel's decision to acquire FPGA-maker, Altera."
>> 
>> #Microsoft's #FPGA Translates #Wikipedia in less than a Tenth of a
>> Second http://hubs.ly/H04JLSp0
>> 
>> I guess this will only steer the FPGA market more in the direction of
>> larger and faster rather than giving us much at the low end of energy
>> efficient and small FPGAs.  That's where I like to live.
>
>Hopefully it'll create a vacuum into which other companies will grow.  
>Very possibly not without some pain in the interim.  Markets change, we 
>have to adapt.

The interim pain includes an almost total absence of tech support for
the smaller users. The biggies get a team of full-time, on-site
support people; small users can't get any support from the principals,
and maybe a little mediocre support from distributors.

That trend is almost universal, but it's worst with FPGAs, where the
tools are enormously complex and correspondingly buggy. Got a problem?
Post it on a forum.




-- 

John Larkin         Highland Technology, Inc

lunatic fringe electronics 


Article: 159374
Subject: Re: Microsoft's FPGA Translates Wikipedia in less than a Tenth of a
From: rickman <gnuarm@gmail.com>
Date: Mon, 17 Oct 2016 18:45:09 -0400
Links: << >>  << T >>  << A >>
On 10/17/2016 6:25 AM, David Brown wrote:
> On 17/10/16 09:56, rickman wrote:
>> On 10/16/2016 8:55 PM, Tim Wescott wrote:
>>> On Sun, 16 Oct 2016 20:22:29 -0400, rickman wrote:
>>>
>>>> I found this pretty impressive.  I wonder if this is why Intel bought
>>>> Altera or if they are not working together on this?  Ulpp!  Seak and yea
>>>> shall find....
>>>>
>>>> "Microsoft is using so many FPGA the company has a direct influence over
>>>> the global FPGA supply and demand. Intel executive vice president, Diane
>>>> Bryant, has already stated that Microsoft is the main reason behind
>>>> Intel's decision to acquire FPGA-maker, Altera."
>>>>
>>>> #Microsoft's #FPGA Translates #Wikipedia in less than a Tenth of a
>>>> Second http://hubs.ly/H04JLSp0
>>>>
>>>> I guess this will only steer the FPGA market more in the direction of
>>>> larger and faster rather than giving us much at the low end of energy
>>>> efficient and small FPGAs.  That's where I like to live.
>>>
>>> Hopefully it'll create a vacuum into which other companies will grow.
>>> Very possibly not without some pain in the interim.  Markets change, we
>>> have to adapt.
>>
>> I've never been clear on the fundamental forces in the FPGA business.
>> The major FPGA companies have operated very similarly catering to the
>> telecom markets while giving pretty much lip service to the rest of the
>> electronics world.
>>
>> I suppose there is a difference in technology requirements between MCUs
>> and FPGAs.  MCUs often are not even near the bleeding edge of process
>> technology while FPGAs seem to drive it to some extent.  Other than
>> Intel who seems to always be the first to bring chips out at a given
>> process node, the FPGA companies are a close second.  But again, I think
>> that is driven by their serving the telecom market where density is king.
>>
>> So I don't see any fundamental reasons why FPGAs can't be built on older
>> processes to keep price down.  If MCUs can be made in a million
>> combinations of RAM, Flash and peripherals, why can't FPGAs?  Even
>> analog is used in MCUs, why can't FPGAs be made with the same processes
>> giving us programmable logic combined with a variety of ADC, DAC and
>> comparators on the same die.  Put them in smaller packages (lower pin
>> counts, not the micro pitch BGAs) and let them to be used like MCUs.
>
> As far as I understand it, there is quite a variation in the types of
> processes used - it's not just about the feature size.  The number of
> layers, the types of layers, the types of doping, the fault tolerance,
> etc., all play a part in what fits well on the same die.  So you might
> easily find that if you put an ADC on a die setup that was good for FPGA
> fabric, then the ADC would be a lot worse (speed, accuracy, power
> consumption, noise, cost) than usual.  Alternatively, your die setup
> could be good for the ADC - and then it would give a poor quality FPGA part.

What's a "poor" FPGA?  MCUs have digital logic, usually as fast as the 
process allows.  They also want the lowest possible power consumption.  What 
part of that is bad for an FPGA?  Forget the analog.  What do you 
sacrifice by building FPGAs on a line that works well for CPUs with 
Flash and RAM?  If you can also build decent analog with that you get an 
MCU/FPGA/Analog device that is no worse than current MCUs.


> Microcontrollers are made with a compromise.  The cpu part is not as
> fast or efficient as a pure cpu could be, nor is the flash part, nor the
> analogue parts.  But they are all good enough that the combination is a
> saving (in dollars and watts, as well as mm²) overall.

It's not much of a compromise.  As you say, they are all good enough.  I 
am sure an FPGA could be combined with little loss of what defines an FPGA.


> But I think there are some FPGA's with basic analogue parts, and
> certainly with flash.  There are also microcontrollers with some
> programmable logic (more CPLD-type logic than FPGA).  Maybe we will see
> more "compromise" parts in the future, but I doubt if we will see good
> analogue bits and good FPGA bits on the same die.

I know of one (well one line) from Microsemi (formerly Actel), 
SmartFusion (not to be confused with SmartFusion2).  They have a CM3 
with SAR ADC and sigma-delta DAC, comparators, etc in addition to the 
FPGA.  So clearly this is possible and it is really a marketing issue, 
not a technical one.

The focus seems to be on the FPGA, but they do give a decent amount of 
Flash and RAM (up to 512 kB and 64 kB respectively).  My main issue is the 
very large packages, all BGA except for the ginormous TQ144.  I'd like 
to see 64 and 100 pin QFPs.


> What will, I think, make more of a difference is multi-die packaging -
> either as side-by-side dies or horizontally layered dies.  But I expect
> that to be more on the high-end first (like FPGA die combined with big
> ram blocks).

Pointless, not to mention costly.  You lose a lot running the FPGA 
to MCU interface through I/O pads for some applications.  That is how 
Intel combined FPGA with their x86 CPUs initially though.  But it is a 
very pricey result.


>> Maybe the market just isn't there.  Many seem to feel FPGAs are much
>> harder to work with than MCUs.  To me they are much simpler.
>>
>
> I think that is habit and familiarity - there is a lot of difference to
> the mindset for FPGA programming and MCU programming.  I don't think you
> can say that one type of development is fundamentally harder or easier
> than the other, but the simple fact is that a great deal more people are
> familiar with programming serial execution devices than with developing
> for programmable logic.

The main difference between programming MCUs and FPGAs is you don't need 
to be concerned with the problems of virtual multitasking (sharing one 
processor between many tasks).  Otherwise FPGAs are pretty durn simple 
to use really.  For sure, some tasks fit well in an MCU.  If you have 
the performance they can be done in an MCU, but that is not a reason why 
they can't be done in an FPGA just as easily.  I know, many times I've 
taken an MCU algorithm and coded it into HDL.  The hard part is 
understanding what the MCU code is doing.

-- 

Rick C


