I apologize if this is the wrong place to ask things like this.
I've got more of a theory-based question today (I'm relatively new to signal processing/etc).
As this is the case I'm happy with just being told what sort of things I should research - the main issue seems to be that I don't know what I don't know.
For context, for the past month or so I've been building a signal generator using an AD9959 and an Arduino Due.
The purpose of this build was to get 3 channels outputting simultaneously: 1 channel at a constant frequency, and the other 2 channels each doing independent frequency sweeps and amplitude ramps - this I have managed to do. The amplitude ramps were achieved by using a for loop to assign amplitude values to the registers. I did it this way because of the limitations of the DDS's built-in amplitude ramping: with a 500 MHz system clock, the slowest ramp interval I could get was 2.048 µs, an order of magnitude too fast for my purpose.
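To make the software ramp concrete, here's a stripped-down sketch of the sort of loop I'm using (the function and variable names are my own inventions; the real code writes each value to the ACR over SPI, which I've left out here):

```cpp
#include <cstdint>
#include <vector>

// The AD9959 ACR amplitude scale factor is a 10-bit field (0..1023),
// so I precompute the code to write at each step of the for loop.
std::vector<uint16_t> buildRamp(uint16_t startAmp, uint16_t endAmp, int steps) {
    std::vector<uint16_t> ramp;
    ramp.reserve(steps);
    for (int i = 0; i < steps; ++i) {
        // linear interpolation between the start and end amplitude codes
        long v = startAmp + (long)(endAmp - startAmp) * i / (steps - 1);
        ramp.push_back((uint16_t)v);
    }
    return ramp; // in the real sketch, each value is written to the ACR in turn
}
```

The step period is then set by however long each loop iteration (including the SPI write) takes, which is how I get ramps slower than the hardware's 2.048 µs minimum interval.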
Now, an issue that I have been aware of for a while is that I get a variation in power with frequency - attached is a graph of some data illustrating this (I have set the power output to 'max' in the ACR register for each frequency).
My frequency range of interest is from ~20-110MHz.
My colleagues do not believe that my cables/wiring are the main reason this is happening, rather they suspect it has something to do with how the DDS works.
I do have a rough understanding of why the power drops with increasing frequency. The system clock is sampled, in a sense: with a 500 MHz system clock, a 125 MHz output would be generated from only the peaks/troughs and zeros of the clock sine wave (i.e. 0, 1, 0, -1, 0). With a desired output of 250 MHz, exactly half of 500 MHz, the clock would be sampled only at the zeros of its sine wave, which is why I get no signal at 250 MHz.
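If I've read the DDS literature correctly, the gradual droop is usually described as a sin(x)/x (sinc) amplitude envelope coming from the DAC holding each sample for a full clock period. I tried to put a number on it with a quick check (this is just my own back-of-envelope calculation, so please correct me if I've misapplied it):

```cpp
#include <cmath>

// Sinc amplitude envelope of a sample-and-hold DAC output:
// A(f) = sin(pi*f/fs) / (pi*f/fs), with fs = 500 MHz in my case.
double sincRolloff(double f, double fs) {
    double x = M_PI * f / fs;
    if (x == 0.0) return 1.0;  // limit of sin(x)/x as x -> 0
    return std::sin(x) / x;
}
```

For example, at 125 MHz out of a 500 MHz clock this gives sin(π/4)/(π/4) ≈ 0.90, i.e. a droop of a bit under 1 dB relative to DC - which seems too gentle to explain my sharp drop at 170 MHz on its own.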
If my understanding is incorrect or if I'm just spouting nonsense then I'd appreciate it if I could be corrected! (I have read through the datasheet and "All About Direct Digital Synthesis" on the Analog Devices site.)
However, what I cannot wrap my head around is why I am getting the peak at ~20 MHz and why I get a sharp drop-off at 170 MHz - what mechanisms could be behind these?
Any suggestions/explanations or pointers on topics to research would be greatly appreciated - I have an idea of a way I could correct for this in my code but I would very much like to actually understand why this is happening to begin with.
Now, the way I'm planning on correcting for this is by determining a power-vs-frequency function and correcting the amplitude scaling in my for loop (syncing the timing of my AM for loop to the frequency sweep via the RDW and LSRR registers).
By "correcting" I mean that inputting "100% amplitude" would give the same power regardless of frequency. Say with my range of 20 MHz to 110 MHz, inputting "100% amplitude" to my device would give an output of -13.2 dBm (the max value at 110 MHz) regardless of the frequency setting (provided it is at most 110 MHz).
The issue with this is that my amplitude resolution would be decreased at lower frequencies (for example, to get -13.2 dBm with the frequency set to 20 MHz, I might only need to send say "60% amplitude" to the ACR register).
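For what it's worth, here is a rough sketch of the correction I have in mind (the calibration numbers and helper names are made up for illustration): interpolate a measured power-vs-frequency table, then scale the amplitude code so every frequency lands at the same target power.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct CalPoint { double freqHz; double powerDbm; };  // measured at 100% amplitude

// Linearly interpolate the measured power between calibration points.
double interpDbm(const std::vector<CalPoint>& cal, double f) {
    for (size_t i = 1; i < cal.size(); ++i) {
        if (f <= cal[i].freqHz) {
            double t = (f - cal[i - 1].freqHz) / (cal[i].freqHz - cal[i - 1].freqHz);
            return cal[i - 1].powerDbm + t * (cal[i].powerDbm - cal[i - 1].powerDbm);
        }
    }
    return cal.back().powerDbm;  // clamp beyond the last calibration point
}

// Amplitude code (0..1023) so that "100%" maps to targetDbm at every frequency.
// The ACR scale factor multiplies the output voltage, so a power excess of
// E dB needs a voltage scale of 10^(-E/20).
uint16_t correctedAmp(const std::vector<CalPoint>& cal, double f, double targetDbm) {
    double excessDb = interpDbm(cal, f) - targetDbm;   // headroom at this frequency
    double scale = std::pow(10.0, -excessDb / 20.0);
    if (scale > 1.0) scale = 1.0;                      // can't exceed full scale
    return (uint16_t)std::lround(1023.0 * scale);
}
```

The resolution loss I mentioned shows up directly here: at frequencies with several dB of headroom, the full 10-bit range collapses into a fraction of the codes.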
Any suggestions on a better way to tackle this would be appreciated, as I find my proposed solution to be a bit crude.
Thank you for taking the time to read my wall of text, any and all advice/suggestions are greatly appreciated.