Master FAQ

Below you will find frequently asked questions, divided into groups: first a general FAQ with information that applies to all master's degrees, followed by more specific FAQs that apply to individual programmes only.

FAQ for all master's degrees

“Signal Processing in the AI era” was the tagline of this year’s IEEE International Conference on Acoustics, Speech and Signal Processing, taking place in Rhodes, Greece.

In this context, Brent de Weerdt, Xiangyu Yang, Boris Joukovsky, Alex Stergiou and Nikos Deligiannis presented ETRO’s research during poster sessions and oral presentations, with novel ways to process and understand graph, video, and audio data. Nikos Deligiannis chaired a session on Graph Deep Learning, attended the IEEE T-IP Editorial Board Meeting, and had the opportunity to meet with collaborators from the VUB-Duke-UGent-UCL joint lab.

Featured articles:

Happy kids visited the ETRO “Build your climate-proof LEGO city” booth at CurieuCity, and the event was also broadcast on Bruzz TV this weekend.

https://curieucity.brussels/nl/build-your-climate-resistant-city-of-the-future/

ETRO’s spin-off Exia was featured prominently on the national news today. The sensor technology can detect objects or persons regardless of their size, helping to prevent fatal accidents with trucks and buses.

Well done!

https://www.vrt.be/vrtnws/nl/kijk/2024/05/02/dodehoekongevallen-repo-arvato-65749764/

WATCH – VUB research saves lives in traffic thanks to innovative blind-spot sensors (knack.be).

FWO has granted the project “Exploiting plasma etching processes for micro/nanotexturing of metal surfaces to enable novel chemical, analytical, optical, and medical applications”.

The plasma metal etcher will be installed in the core facility MICROLAB. The project execution will be coordinated by Prof. Wim de Malsche (CHIS) together with two ETRO promoters, Prof. Johan Stiens and Prof. Peter Schelkens. Step by step, the microfabrication facilities are growing, allowing us to dive deeper again into microfabrication-related projects.

https://vub.sharepoint.com/sites/PUB_PhD/SitePages/Activities,-language,-publishing-and-travel-grants-for-PhD-candidates.aspx

Requests are collected and evaluated at the harvesting moments:

  • 1 February
  • 1 April
  • 1 June
  • 1 October
  • 1 December

PhD candidates will be informed about one week after the respective harvesting moment.

eTailor, an Innoviris-funded project of Treedy’s and ETRO-VUB, developed a full-body scanner that extrapolates your measurements without you even having to take your clothes off. It will be deployed in Decathlon shops (and not only there) worldwide.

The IP behind this technology is partly shared between VUB and Treedy’s. A VUB-Treedy’s patent was recently accepted, covering the technology that estimates body shape under clothing and automatically takes measurements of each scanned person.

eTailor is an example of how an industrial project should run: achieving both academic and industrial excellence.

Press coverage: Brussels Times, L’Echo, Solar Impulse, Knack, La Libre.

Prof. Dr. Em. Roger Vounckx gave his final lecture on Friday 29 March 2024.

Many thanks for all the insights you shared with ETRO, the students and VUB, both in science and in many other areas.

Cheers!

Some ETRO staff went to the recent gala ball organised by the engineering students’ association (PK). We also know how to party!

At the recent FTI open campus day, the ETRO demos caught a lot of attention, as the pictures show. Visitors ranged from those fascinated by discovering the brain from the inside to maker engineers asking very technical questions, and Flemish Minister Benjamin Dalle was also present.

After the conservation and restoration project for Jan Van Eyck’s masterpiece, the Royal Institute for Cultural Heritage documented both sides of the painting with hundreds of macro photographs. Universum Digitalis then algorithmically assembled those images to produce gigapixel images of the artwork. The painting was previously digitized in 2015 using the same scientific protocol. Universum Digitalis seamlessly aligned both acquisitions, enabling a unique pixel-level comparison before and after restoration.

Comparison of the front and backside before and after restoration.

The restored painting will be exhibited at The Louvre Museum during the exhibition “Revoir Van Eyck – La Vierge du chancelier Rolin” from March 20th to June 17th, 2024. In parallel with the exhibition’s opening, the gigapixel images produced by Universum Digitalis will be made publicly accessible on the Closer to Van Eyck website.

https://www.louvre.fr/en/what-s-on/exhibitions/a-new-look-at-jan-van-eyck

https://closertovaneyck.be

After two years of dedicated research and development under the leadership of ETRO-VUB, a breakthrough has been achieved within the INTOWALL project: a revolutionary radar technology for building inspection was developed, called the transient radar method (TRM). The initiative aimed to reduce the CO2 emissions of buildings and increase their energy efficiency.

The new technology enables the measurement of the density of glass wool in cavity walls with unprecedented precision, without the need for invasive methods. “This advancement not only promises to improve the accuracy of insulation assessments but also contributes to the ambition to achieve a CO2-neutral status by 2050,” says Professor Johan Stiens of ETRO.

Looking towards the future, the project team is focused on further refining the technology to map a wide range of insulation materials and building elements. This prospect of larger-scale application opens many possibilities. As part of the FTI Brussels Festival, the milestone of the INTOWALL project was celebrated with a unique demonstration on March 18, 2024.

Additionally, the project team invites potential partners to contribute to and participate in this groundbreaking endeavour. Through collaboration, we can steer the construction sector towards a more sustainable and efficient future.

For more information, see the InToWall press articles: https://press.vub.ac.be/wereldprimeur-in-radartechnologie and https://trends.knack.be/kanaal-z/z-nieuws/bekijk-radar-van-vub-ziet-isolatie-dwars-door-muren-heen/

On March 29th 2024 at 17:00, Jurgen Vandendriessche will defend their PhD, entitled “Towards Smart Acoustic Cameras for Simultaneous Sound Localization and Recognition”.

Everybody is invited to attend the presentation in room I.0.01, or digitally via this link.

Abstract 

Acoustic cameras are devices that visualize sound using an array of microphones. The signals from the microphones are combined with a beamforming algorithm to generate an acoustic heatmap, or acoustic image. These beamforming algorithms tend to have a high computational cost, which increases with the number of microphones. The combination of high Input/Output (I/O) requirements for the microphones and a large amount of parallel computation makes Field Programmable Gate Arrays (FPGAs) very suitable for processing the signals from these microphone arrays. FPGAs have a low power consumption, which makes them especially viable for battery-powered devices such as handheld acoustic cameras or nodes in a sensor network. Despite the high computational power per watt of FPGAs, satisfying real-time scenarios still presents a challenge, especially when targeting acoustic images with a higher resolution. To overcome this challenge, a multi-mode acoustic camera has been developed. The camera supports multiple modes depending on the task at hand, and to satisfy the real-time requirement of each mode, the resolution of the acoustic heatmap can be adapted. A second limitation of existing acoustic cameras is the identification of the type of sound, which commonly requires human expertise to recognize and profile the sound.

In recent years, deep learning, a form of Artificial Intelligence (AI), has shown promising results on the task of sound recognition using Convolutional Neural Networks (CNNs). However, most research focuses on improving the accuracy of such models without considering the limitations encountered when deploying them on resource-constrained devices such as FPGAs. FPGAs are nowadays used for embedded deep learning inference, mainly via two architectures. One type of architecture uses a general-purpose soft core inside the Programmable Logic (PL) of the FPGA; the other, dataflow-based type translates each layer of a CNN into a functional block in the PL. Embedding CNNs for inference on FPGAs is not a trivial task and comes with trade-offs in terms of resource consumption, accuracy, and supported layers. These two architectures are compared against other embedded solutions, such as Google’s edge Tensor Processing Unit (TPU) and a Raspberry Pi (RPi), to find the best fit for acoustic cameras. Acoustic cameras are targeted here because they can identify the location of a sound source, which is not possible with a single microphone. Furthermore, existing beamforming techniques such as delay-and-sum reconstruct audio signals, which can be used for audio classification tasks.
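To make the delay-and-sum idea mentioned above concrete, here is a minimal NumPy sketch: each microphone channel is shifted by its geometric delay for a chosen look direction and the channels are averaged, so sound from that direction adds coherently. The function name, array layout, and parameters are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, look_dir, fs, c=343.0):
    """Time-domain delay-and-sum beamformer (integer-sample delays).

    signals:       (n_mics, n_samples) array of time-domain samples
    mic_positions: (n_mics, 3) microphone positions in metres
    look_dir:      (3,) unit vector of the steering direction
    fs:            sampling rate in Hz; c: speed of sound in m/s
    """
    # Relative delay of each mic: projection of its position onto the
    # look direction, converted from seconds to whole samples.
    delays = mic_positions @ look_dir / c
    shifts = np.round((delays - delays.min()) * fs).astype(int)
    n = signals.shape[1] - shifts.max()
    # Align all channels to a common time window, then average.
    aligned = np.stack([s[d:d + n] for s, d in zip(signals, shifts)])
    return aligned.mean(axis=0)
```

Scanning `look_dir` over a grid of directions and plotting the output power of each steered beam is what produces the acoustic heatmap; the per-direction cost is why the total work grows with both microphone count and image resolution.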

On March 29th 2024 at 16:00, Lucas Santana will defend their PhD, entitled “Towards Uncharted Territories: High-Performance and High-Bandwidth Ringamp-Based Delta-Sigma ADCs”.

Everybody is invited to attend the presentation in room D.2.01, or digitally via this link.

Abstract 

Analog-to-digital converter (ADC) research often happens in detachment from the intended application; although a motivation is sometimes presented, it is not always implemented with the proposed prototype. Advancements in ADC linearity and speed enable previously nonexistent applications to emerge, such as direct RF conversion and 8K camera recording. Most ADC architectures cover all regions of the performance space, being at the forefront of the state of the art in some areas and less so in others. This broad coverage makes it possible to exploit the advantages and peculiarities of different architectures across different applications. One notable architecture that does not follow this pattern is the Discrete-Time (DT) Delta-Sigma Modulator (DSM) ADC, whose published state-of-the-art bandwidth is limited to 20 MHz. This work investigates this limitation, showing that it can be overcome with high-efficiency ring amplifiers (ringamps) and the correct design process. It presents a prototype of a single-loop 3rd-order DT DSM ADC with a ringamp-based loop filter that doubles the bandwidth reached by DT DSM ADCs, to 47.5 MHz, and achieves 67 dB of signal-to-noise and distortion ratio (SNDR) when clocked at 950 MHz. It also shows outstanding figures of merit (FoM): a Schreier FoM (FoMs) of 167 dB and a Walden FoM (FoMw) of 27 fJ per conversion step. The second prototype used time interleaving to further improve the sampling rate and bandwidth, and a noise-coupled (NC) noise-shaping (NS) SAR quantizer to enable aliased-noise suppression. It achieved a sampling rate of 1.4 GS/s and a decimated bandwidth of 70 MHz at a peak SNDR of 67 dB, with a power consumption of 32 mW; this translates to a FoMs of 160 dB and a FoMw of 143 fJ per conversion step.
Both prototypes are the first to pave the way towards efficiently increasing the bandwidth of DT DSM ADCs, and they can still benefit from recent developments in ringamps and noise-shaping SAR ADCs, allowing the architecture to conquer even more of this uncharted territory.
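The quoted Schreier figure of merit can be sanity-checked from the abstract's own numbers using the standard definition FoMs = SNDR + 10·log10(BW / P); the helper function below is an illustration of that formula, not code from the thesis.

```python
import math

def schreier_fom(sndr_db, bandwidth_hz, power_w):
    """Schreier figure of merit: FoMs = SNDR + 10*log10(BW / P), in dB."""
    return sndr_db + 10 * math.log10(bandwidth_hz / power_w)

# Second prototype: 67 dB SNDR, 70 MHz decimated bandwidth, 32 mW.
fom_s = schreier_fom(67.0, 70e6, 32e-3)
print(round(fom_s))  # → 160, matching the quoted 160 dB
```

The same formula applied to the first prototype (167 dB at 67 dB SNDR and 47.5 MHz) implies a power consumption of roughly 5 mW, consistent with the very low Walden FoM reported.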




FAQ for the Master Applied Computer Science





FAQ for the Master Biomedical Engineering
