P3-2: GD-Retriever: Controllable generative text-music retrieval with diffusion models
Julien Guinot, Elio Quinton, George Fazekas
Subjects: Multimodality ; Music retrieval systems ; Generative Tasks ; Interactions ; Applications ; Open Review ; MIR fundamentals and methodology
Presented In-person
4-minute short-format presentation
Multimodal contrastive models have achieved strong performance in text-audio retrieval and zero-shot settings, but improving joint embedding spaces remains an active research area. Less attention has been given to making these systems controllable and interactive for users. In text-music retrieval, the ambiguity of freeform language creates a many-to-many mapping, often resulting in inflexible or unsatisfying results.
We introduce Generative Diffusion Retriever (GDR), a novel framework that leverages diffusion models to generate queries in a retrieval-optimized latent space. This enables controllability through generative tools such as negative prompting and denoising diffusion implicit models (DDIM) inversion, opening a new direction in retrieval control. GDR improves retrieval performance over contrastive teacher models and supports retrieval in audio-only latent spaces using non-jointly trained encoders. Finally, we demonstrate that GDR enables effective post-hoc manipulation of retrieval behavior, enhancing interactive control for text-music retrieval tasks.
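As a rough illustrative sketch of the mechanism the abstract describes (not the authors' implementation: the linear noise schedule, the toy denoiser, and the guidance formulation below are all hypothetical stand-ins), negative prompting during DDIM sampling can steer a generated query embedding away from an unwanted concept:

```python
import numpy as np

def ddim_update(x_t, eps, abar_t, abar_prev):
    # Deterministic DDIM step (eta = 0): predict the clean embedding, then
    # re-noise it to the previous (less noisy) timestep.
    x0_pred = (x_t - np.sqrt(1.0 - abar_t) * eps) / np.sqrt(abar_t)
    return np.sqrt(abar_prev) * x0_pred + np.sqrt(1.0 - abar_prev) * eps

def generate_query(denoiser, cond, neg_cond, dim, steps=20, guidance=2.0, seed=0):
    """Sample a retrieval query in the audio-embedding space.

    Negative prompting is done classifier-free-guidance style: the noise
    prediction is pushed toward `cond` and away from `neg_cond`.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)                  # start from pure noise
    abar = np.linspace(0.999, 0.001, steps + 1)   # abar[0] ~ clean, abar[-1] ~ noise
    for t in range(steps, 0, -1):
        eps_pos = denoiser(x, abar[t], cond)
        eps_neg = denoiser(x, abar[t], neg_cond)
        eps = eps_neg + guidance * (eps_pos - eps_neg)
        x = ddim_update(x, eps, abar[t], abar[t - 1])
    return x

def toy_denoiser(x_t, abar_t, target):
    # Stand-in for a trained score network: the ideal eps-prediction when the
    # clean embedding is exactly `target`.
    return (x_t - np.sqrt(abar_t) * target) / np.sqrt(1.0 - abar_t)
```

With this idealized linear denoiser, the sampled query converges to `neg_cond + guidance * (cond - neg_cond)`, i.e. an extrapolation past the positive condition away from the negative one, which is the intuition behind negative prompting in retrieval.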
Q2 ( I am an expert on the topic of the paper.)
Strongly agree
Q3 ( The title and abstract reflect the content of the paper.)
Strongly agree
Q4 (The paper discusses, cites and compares with all relevant related work.)
Strongly agree
Q6 (Readability and paper organization: The writing and language are clear and structured in a logical manner.)
Agree
Q7 (The paper adheres to ISMIR 2025 submission guidelines (uses the ISMIR 2025 template, has at most 6 pages of technical content followed by “n” pages of references or ethical considerations, references are well formatted). If you selected “No”, please explain the issue in your comments.)
Yes
Q8 (Relevance of the topic to ISMIR: The topic of the paper is relevant to the ISMIR community. Note that submissions of novel music-related topics, tasks, and applications are highly encouraged. If you think that the paper has merit but does not exactly match the topics of ISMIR, please do not simply reject the paper but instead communicate this to the Program Committee Chairs. Please do not penalize the paper when the proposed method can also be applied to non-music domains if it is shown to be useful in music domains.)
Strongly agree
Q9 (Scholarly/scientific quality: The content is scientifically correct.)
Strongly agree
Q11 (Novelty of the paper: The paper provides novel methods, applications, findings or results. Please do not narrowly view "novelty" as only new methods or theories. Papers proposing novel musical applications of existing methods from other research fields are considered novel at ISMIR conferences.)
Strongly agree
Q12 (The paper provides all the necessary details or material to reproduce the results described in the paper. Keep in mind that ISMIR respects the diversity of academic disciplines, backgrounds, and approaches. Although ISMIR has a tradition of publishing open datasets and open-source projects to enhance the scientific reproducibility, ISMIR accepts submissions using proprietary datasets and implementations that are not sharable. Please do not simply reject the paper when proprietary datasets or implementations are used.)
Agree
Q13 (Pioneering proposals: This paper proposes a novel topic, task or application. Since this is intended to encourage brave new ideas and challenges, papers rated “Strongly Agree” and “Agree” can be highlighted, but please do not penalize papers rated “Disagree” or “Strongly Disagree”. Keep in mind that it is often difficult to provide baseline comparisons for novel topics, tasks, or applications. If you think that the novelty is high but the evaluation is weak, please do not simply reject the paper but carefully assess the value of the paper for the community.)
Strongly Agree (Very novel topic, task, or application)
Q14 (Reusable insights: The paper provides reusable insights (i.e. the capacity to gain an accurate and deep understanding). Such insights may go beyond the scope of the paper, domain or application, in order to build up consistent knowledge across the MIR community.)
Strongly agree
Q15 (Please explain your assessment of reusable insights in the paper.)
The application of diffusion models to retrieval is very interesting, especially the possibility of performing negative or refined queries.
Q16 ( Write ONE line (in your own words) with the main take-home message from the paper.)
Generative diffusion models can be leveraged for text-based music retrieval, and enable negative and refined queries.
Q17 (This paper is of award-winning quality.)
No
Q19 (Potential to generate discourse: The paper will generate discourse at the ISMIR conference or have a large influence/impact on the future of the ISMIR community.)
Strongly agree
Q20 (Overall evaluation (to be completed before the discussion phase): Please first evaluate before the discussion phase. Keep in mind that minor flaws can be corrected, and should not be a reason to reject a paper. Please familiarize yourself with the reviewer guidelines at https://ismir.net/reviewer-guidelines.)
Strong accept
Q21 (Main review and comments for the authors (to be completed before the discussion phase). Please summarize strengths and weaknesses of the paper. It is essential that you justify the reason for the overall evaluation score in detail. Keep in mind that belittling or sarcastic comments are not appropriate.)
This paper presents an approach to multimodal music retrieval that, instead of training a joint multimodal model, uses a generative diffusion-based model to translate embeddings from a text encoder into the space of audio embeddings. The particularity is that instead of a single embedding, this generative approach produces several embeddings that better represent non-exact queries. The major contribution to the field of music retrieval is the possibility of adding conditioning to the generative model to perform operations such as negative queries or refinement of previous queries. This is very relevant for real-world retrieval systems. The paper is well written and clearly explained, and the evaluation is comprehensive. I have some minor comments:
- When the authors refer to a sequence of embeddings, it is not clear whether they mean a sequence of dimensions or a set of embeddings corresponding to different parts of a song. This should be better explained.
- There is a missing capital letter at the beginning of the sentence in line 132.
- The authors argue that this approach avoids training a multimodal model. However, it merely substitutes a contrastive learning model with a diffusion model that translates embeddings from the text modality to the audio modality. There is therefore no simplification: the model still needs to be trained on embeddings from both modalities, in the same way that contrastive multimodal approaches can be trained with frozen encoders. This claim should be toned down accordingly.
- In the paragraph starting at line 160, a description of the difference between z and Z would be useful. Only here, with the mention of time-wise average pooling, does it become apparent that the sequence of embeddings may correspond to different parts of a track, but not before.
- The authors use the MULE baseline and say the approach was reimplemented on the MTG Jamendo dataset. Does this mean they trained the unsupervised approach described in the reference paper, or that they used the released model and computed embeddings for the Jamendo dataset?
- A short description of what the authors consider out-of-domain would be useful.
- Table 2 would benefit from reordering: placing each teacher model directly next to its GD retriever (e.g., CLAP immediately followed by GDE-CLAP) would help the reader follow the discussion.
- Add a short description of the CLAP scores, so the reader does not have to check the source paper.
- The part on negative queries and query conditioning is the most interesting part of the paper; a deeper dive into this section would have been welcome, along with a discussion of future work in this direction.
Q22 (Final recommendation (to be completed after the discussion phase) Please give a final recommendation after the discussion phase. In the final recommendation, please do not simply average the scores of the reviewers. Note that the number of recommendation options for reviewers is different from the number of options here. We encourage you to take a stand, and preferably avoid “weak accepts” or “weak rejects” if possible.)
Accept
Q23 (Meta-review and final comments for authors (to be completed after the discussion phase))
The reviewers agree about the novelty and relevance of the work. The method is clearly explained, and the experimental results are convincing. The idea of leveraging pre-trained modality-specific encoders while avoiding joint training is appreciated, though as one reviewer noted, the claim that this “avoids multimodal training” might be too strong, since the model still learns mappings between modalities. This point should be toned down accordingly.
Another common point in the reviews concerns the controllability aspect. Several reviewers, including myself, find this to be one of the most interesting features of the paper. However, the current experiments and analysis around it are relatively limited. A deeper exploration — either through qualitative examples or user-facing use cases — would significantly strengthen this part of the work.
There were also comments about generalization, particularly regarding the reliance on the PrivateCaps dataset. While the use of private data is acceptable within ISMIR guidelines, the limited evaluation on public datasets makes it harder to assess reproducibility and broader applicability. Clarifying these points and discussing future directions to address domain mismatch would be helpful.
Finally, the paper would benefit from some minor edits and clarifications, including:
A clearer explanation of what is meant by "sequence of embeddings"
More details about how baselines like MULE were implemented
Better organization in tables (especially Table 2)
Minor corrections in grammar and table labels
Overall, this is a solid and timely contribution that opens up new directions in controllable music retrieval. Despite the noted limitations, I support acceptance of this paper and look forward to seeing further developments on this line of work.
Q2 ( I am an expert on the topic of the paper.)
Agree
Q3 (The title and abstract reflect the content of the paper.)
Agree
Q4 (The paper discusses, cites and compares with all relevant related work)
Agree
Q6 (Readability and paper organization: The writing and language are clear and structured in a logical manner.)
Agree
Q7 (The paper adheres to ISMIR 2025 submission guidelines (uses the ISMIR 2025 template, has at most 6 pages of technical content followed by “n” pages of references or ethical considerations, references are well formatted). If you selected “No”, please explain the issue in your comments.)
Yes
Q8 (Relevance of the topic to ISMIR: The topic of the paper is relevant to the ISMIR community. Note that submissions of novel music-related topics, tasks, and applications are highly encouraged. If you think that the paper has merit but does not exactly match the topics of ISMIR, please do not simply reject the paper but instead communicate this to the Program Committee Chairs. Please do not penalize the paper when the proposed method can also be applied to non-music domains if it is shown to be useful in music domains.)
Agree
Q9 (Scholarly/scientific quality: The content is scientifically correct.)
Agree
Q11 (Novelty of the paper: The paper provides novel methods, applications, findings or results. Please do not narrowly view "novelty" as only new methods or theories. Papers proposing novel musical applications of existing methods from other research fields are considered novel at ISMIR conferences.)
Agree
Q12 (The paper provides all the necessary details or material to reproduce the results described in the paper. Keep in mind that ISMIR respects the diversity of academic disciplines, backgrounds, and approaches. Although ISMIR has a tradition of publishing open datasets and open-source projects to enhance the scientific reproducibility, ISMIR accepts submissions using proprietary datasets and implementations that are not sharable. Please do not simply reject the paper when proprietary datasets or implementations are used.)
Agree
Q13 (Pioneering proposals: This paper proposes a novel topic, task or application. Since this is intended to encourage brave new ideas and challenges, papers rated "Strongly Agree" and "Agree" can be highlighted, but please do not penalize papers rated "Disagree" or "Strongly Disagree". Keep in mind that it is often difficult to provide baseline comparisons for novel topics, tasks, or applications. If you think that the novelty is high but the evaluation is weak, please do not simply reject the paper but carefully assess the value of the paper for the community.)
Agree (Novel topic, task, or application)
Q14 (Reusable insights: The paper provides reusable insights (i.e. the capacity to gain an accurate and deep understanding). Such insights may go beyond the scope of the paper, domain or application, in order to build up consistent knowledge across the MIR community.)
Agree
Q15 (Please explain your assessment of reusable insights in the paper.)
By using the GD-RETRIEVER architecture, two additional benefits are gained: (1) the ability to operate within an audio-only latent space that is not jointly trained with text, and (2) the flexibility to support arbitrary text encoders for conditioning.
Q16 (Write ONE line (in your own words) with the main take-home message from the paper.)
This paper proposes GD-RETRIEVER, a method that generates controllable queries using a diffusion model for music retrieval based on text queries.
Q17 (Would you recommend this paper for an award?)
No
Q19 (Potential to generate discourse: The paper will generate discourse at the ISMIR conference or have a large influence/impact on the future of the ISMIR community.)
Agree
Q20 (Overall evaluation: Keep in mind that minor flaws can be corrected, and should not be a reason to reject a paper. Please familiarize yourself with the reviewer guidelines at https://ismir.net/reviewer-guidelines)
Weak accept
Q21 (Main review and comments for the authors. Please summarize strengths and weaknesses of the paper. It is essential that you justify the reason for the overall evaluation score in detail. Keep in mind that belittling or sarcastic comments are not appropriate.)
This paper proposes Generative Diffusion Retriever (GD-RETRIEVER), which applies diffusion models to focus on the important challenge of controllability in text-to-music retrieval, making it a pioneering work in the field. The main contribution, adapting generative model control techniques such as negative prompting and DDIM inversion to retrieval, is interesting, and the idea of enabling interactive search experiences for users is commendable. Furthermore, the flexibility to utilize encoders that are not jointly trained is an advantage.
However, there are several concerns regarding the proposed method. Most notably, retrieval performance varies between in-domain data (PrivateCaps) and out-of-domain data (MusicCaps). Although the paper attributes this to domain mismatch and proposes latent space alignment as a mitigation strategy, analyzing such mismatch for each model is not practical in real-world scenarios.
Another concern is that the main training results rely heavily on a private dataset (PrivateCaps). While this is permitted under ISMIR’s policy, it limits the ability of other researchers to reproduce or verify the results. It also remains unclear whether the trends observed with PrivateCaps hold when evaluated solely on public datasets.
Although the controllability of the model is evaluated, including some quantitative analyses using CLAP scores, the lack of user studies assessing how effective or intuitive these control features are from a user perspective is unfortunate. Even simply illustrating the example retrieval results that can be performed in real-world scenarios would serve as a strong validation of the usefulness of the proposed method.
These concerns, especially those regarding generalizability and reproducibility, somewhat weaken the overall impact of the paper. Nevertheless, the novel direction of controllable retrieval, the creative use of diffusion models for retrieval tasks, and the thorough analyses (on domain mismatch and query quality) make this a valuable contribution. Future work is expected to address domain mismatch more comprehensively and to strengthen evaluation on public datasets.
There are also a few typographical errors: * Lines 214–216: punctuation (period placement) * Table 6: “NPP” should be “PNP”, etc.
Q2 ( I am an expert on the topic of the paper.)
Agree
Q3 (The title and abstract reflect the content of the paper.)
Agree
Q4 (The paper discusses, cites and compares with all relevant related work)
Agree
Q6 (Readability and paper organization: The writing and language are clear and structured in a logical manner.)
Agree
Q7 (The paper adheres to ISMIR 2025 submission guidelines (uses the ISMIR 2025 template, has at most 6 pages of technical content followed by “n” pages of references or ethical considerations, references are well formatted). If you selected “No”, please explain the issue in your comments.)
Yes
Q8 (Relevance of the topic to ISMIR: The topic of the paper is relevant to the ISMIR community. Note that submissions of novel music-related topics, tasks, and applications are highly encouraged. If you think that the paper has merit but does not exactly match the topics of ISMIR, please do not simply reject the paper but instead communicate this to the Program Committee Chairs. Please do not penalize the paper when the proposed method can also be applied to non-music domains if it is shown to be useful in music domains.)
Agree
Q9 (Scholarly/scientific quality: The content is scientifically correct.)
Agree
Q11 (Novelty of the paper: The paper provides novel methods, applications, findings or results. Please do not narrowly view "novelty" as only new methods or theories. Papers proposing novel musical applications of existing methods from other research fields are considered novel at ISMIR conferences.)
Agree
Q12 (The paper provides all the necessary details or material to reproduce the results described in the paper. Keep in mind that ISMIR respects the diversity of academic disciplines, backgrounds, and approaches. Although ISMIR has a tradition of publishing open datasets and open-source projects to enhance the scientific reproducibility, ISMIR accepts submissions using proprietary datasets and implementations that are not sharable. Please do not simply reject the paper when proprietary datasets or implementations are used.)
Agree
Q13 (Pioneering proposals: This paper proposes a novel topic, task or application. Since this is intended to encourage brave new ideas and challenges, papers rated "Strongly Agree" and "Agree" can be highlighted, but please do not penalize papers rated "Disagree" or "Strongly Disagree". Keep in mind that it is often difficult to provide baseline comparisons for novel topics, tasks, or applications. If you think that the novelty is high but the evaluation is weak, please do not simply reject the paper but carefully assess the value of the paper for the community.)
Disagree (Standard topic, task, or application)
Q14 (Reusable insights: The paper provides reusable insights (i.e. the capacity to gain an accurate and deep understanding). Such insights may go beyond the scope of the paper, domain or application, in order to build up consistent knowledge across the MIR community.)
Agree
Q15 (Please explain your assessment of reusable insights in the paper.)
Text-to-music retrieval can be done using a diffusion model.
Q16 (Write ONE line (in your own words) with the main take-home message from the paper.)
A diffusion model can generate the query embedding for text-to-music retrieval.
Q17 (Would you recommend this paper for an award?)
No
Q19 (Potential to generate discourse: The paper will generate discourse at the ISMIR conference or have a large influence/impact on the future of the ISMIR community.)
Disagree
Q20 (Overall evaluation: Keep in mind that minor flaws can be corrected, and should not be a reason to reject a paper. Please familiarize yourself with the reviewer guidelines at https://ismir.net/reviewer-guidelines)
Weak accept
Q21 (Main review and comments for the authors. Please summarize strengths and weaknesses of the paper. It is essential that you justify the reason for the overall evaluation score in detail. Keep in mind that belittling or sarcastic comments are not appropriate.)
The paper proposes to use a diffusion model for the text-to-music retrieval task rather than for audio generation. The methodology is to train a diffusion model to generate audio embeddings under text conditioning. To verify the effectiveness of the proposed method, the authors first check whether the approach is superior to retrieval based on a CLAP-like text-audio joint embedding model. Table 2 shows that the proposed method does outperform CLAP-like models. However, since a CLAP-like joint embedding model is used for both the text conditioning and the audio embeddings, some performance degradation remains due to the imperfection of the joint embedding model. The authors therefore use separate models for text conditioning and audio embeddings, and Table 4 shows that this resolves the problem well. Taking a step back, the proposed method can also be viewed as a simple regressor: the model is trained to predict audio embeddings from text, and songs are retrieved based on the predicted audio embeddings. The authors accordingly compare the proposed method with a simple regression baseline and a simpler diffusion model, and verify that it outperforms both. Finally, since the proposed method is a diffusion model, several techniques such as negative prompting and DDIM inversion can be applied for more controllable retrieval. Overall, the paper is well written, and the experiments addressed the concerns I had while reading (the regression experiment in particular is really nice to have).
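To make the retrieval stage of the pipeline described above concrete, a minimal sketch (function names and the max-pooling scoring rule are my own assumptions, not the authors' code) of ranking a catalog against several generated query embeddings could look like:

```python
import numpy as np

def rank_tracks(query_embs, catalog_embs):
    """Rank catalog tracks against one or more generated query embeddings.

    Each row of `query_embs` is one sampled query; a track's score is its best
    cosine similarity across the sampled queries, so a single ambiguous text
    prompt can cover several plausible audio targets.
    """
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    c = catalog_embs / np.linalg.norm(catalog_embs, axis=1, keepdims=True)
    scores = (q @ c.T).max(axis=0)   # (n_tracks,) best match per track
    return np.argsort(-scores)       # track indices, best first
```

Scoring by the maximum over sampled queries (rather than averaging them into one vector) is one plausible way to exploit the many-to-many nature of freeform text queries that the reviewers highlight.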
An additional comment the authors may wish to consider: since the model is trained on caption-audio pairs with a diffusion model, I am curious how well it works in really simple tag-based retrieval cases. For example, if the user types "hiphop", would the model still work well? If the authors could evaluate the proposed model on a tag-based retrieval task (using a well-established tag dataset) and compare the pros and cons against CLAP-like models, it would give readers many further insights. Even if performance degrades in this scenario, there would be many lessons readers could take from such an experiment.
Q2 ( I am an expert on the topic of the paper.)
Agree
Q3 (The title and abstract reflect the content of the paper.)
Agree
Q4 (The paper discusses, cites and compares with all relevant related work)
Agree
Q5 (Please justify the previous choice (Required if “Strongly Disagree” or “Disagree” is chosen, otherwise write "n/a"))
The paper cites most foundational work in audio contrastive learning (CLAP, MuLan, MusCALL), and key prior art in generative audio/music diffusion based approaches. It could lightly expand on recent symbolic-control frameworks, but overall coverage is strong. One optional addition could be AudioCLIP or Mousai, for completeness in audio-text diffusion and representation baselines, though their omission doesn’t materially weaken the paper.
Q6 (Readability and paper organization: The writing and language are clear and structured in a logical manner.)
Agree
Q7 (The paper adheres to ISMIR 2025 submission guidelines (uses the ISMIR 2025 template, has at most 6 pages of technical content followed by “n” pages of references or ethical considerations, references are well formatted). If you selected “No”, please explain the issue in your comments.)
Yes
Q8 (Relevance of the topic to ISMIR: The topic of the paper is relevant to the ISMIR community. Note that submissions of novel music-related topics, tasks, and applications are highly encouraged. If you think that the paper has merit but does not exactly match the topics of ISMIR, please do not simply reject the paper but instead communicate this to the Program Committee Chairs. Please do not penalize the paper when the proposed method can also be applied to non-music domains if it is shown to be useful in music domains.)
Strongly agree
Q9 (Scholarly/scientific quality: The content is scientifically correct.)
Agree
Q11 (Novelty of the paper: The paper provides novel methods, applications, findings or results. Please do not narrowly view "novelty" as only new methods or theories. Papers proposing novel musical applications of existing methods from other research fields are considered novel at ISMIR conferences.)
Agree
Q12 (The paper provides all the necessary details or material to reproduce the results described in the paper. Keep in mind that ISMIR respects the diversity of academic disciplines, backgrounds, and approaches. Although ISMIR has a tradition of publishing open datasets and open-source projects to enhance the scientific reproducibility, ISMIR accepts submissions using proprietary datasets and implementations that are not sharable. Please do not simply reject the paper when proprietary datasets or implementations are used.)
Agree
Q13 (Pioneering proposals: This paper proposes a novel topic, task or application. Since this is intended to encourage brave new ideas and challenges, papers rated "Strongly Agree" and "Agree" can be highlighted, but please do not penalize papers rated "Disagree" or "Strongly Disagree". Keep in mind that it is often difficult to provide baseline comparisons for novel topics, tasks, or applications. If you think that the novelty is high but the evaluation is weak, please do not simply reject the paper but carefully assess the value of the paper for the community.)
Agree (Novel topic, task, or application)
Q14 (Reusable insights: The paper provides reusable insights (i.e. the capacity to gain an accurate and deep understanding). Such insights may go beyond the scope of the paper, domain or application, in order to build up consistent knowledge across the MIR community.)
Strongly agree
Q15 (Please explain your assessment of reusable insights in the paper.)
Insights are highly reusable both in terms of scholarly/scientific relevance and in terms of potential real-world applications. Having extensive experience in both retrieval-based and generative paradigms for music creation, I believe the work outlined here can be taken further on a number of fronts, including:
- Better approaches for enabling cross-modality by leveraging existing single-modality pre-trained models: as more and more single-modality pre-trained models become available via open source, it is highly relevant to devise more effective ways and better practices to unlock cross-modality between independently trained models, including ways to deal with distribution mismatches.
- Hybrid systems that seamlessly blend generation and retrieval. For example, given a text query, the system could first perform retrieval to assess whether an existing candidate satisfies the query with high accuracy; otherwise, it could go ahead and generate a new relevant output.
- Hybrid systems that use retrieval to identify and inject better/enhanced audio conditioning into generative models.
- Going deeper into retrieval controllability. Controllability remains a huge open area of research, particularly in generative models, and is one of the key areas to improve for enabling better human-in-the-loop mechanics in iterative creation. Continued investment in highly semantic and arithmetic-friendly controllability in both retrieval and generation settings (including hybrid systems like the one mentioned above) remains quite relevant.
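The retrieve-or-generate hybrid suggested above could be dispatched as simply as the following sketch (the threshold, function names, and generation fallback are all hypothetical, purely to illustrate the control flow):

```python
import numpy as np

def retrieve_or_generate(query_emb, catalog_embs, threshold, generate_fn):
    # If some catalog item matches the query closely enough (cosine
    # similarity above `threshold`), return its index; otherwise fall back
    # to generating new audio for the query.
    q = query_emb / np.linalg.norm(query_emb)
    c = catalog_embs / np.linalg.norm(catalog_embs, axis=1, keepdims=True)
    sims = c @ q
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        return ("retrieved", best)
    return ("generated", generate_fn(query_emb))
```

In practice the threshold would have to be calibrated per embedding space, since cosine-similarity distributions differ between models.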
Q16 (Write ONE line (in your own words) with the main take-home message from the paper.)
The paper introduces GD-Retriever, a diffusion-based framework that enables controllable and interactive text-to-music retrieval by generating latent queries without requiring joint text-audio training.
Q17 (Would you recommend this paper for an award?)
No
Q19 (Potential to generate discourse: The paper will generate discourse at the ISMIR conference or have a large influence/impact on the future of the ISMIR community.)
Agree
Q20 (Overall evaluation: Keep in mind that minor flaws can be corrected, and should not be a reason to reject a paper. Please familiarize yourself with the reviewer guidelines at https://ismir.net/reviewer-guidelines)
Weak accept
Q21 (Main review and comments for the authors. Please summarize strengths and weaknesses of the paper. It is essential that you justify the reason for the overall evaluation score in detail. Keep in mind that belittling or sarcastic comments are not appropriate.)
- Highly relevant work overall.
- Well-introduced and well-explained approach and justification. Clear writing.
- As mentioned in Q15, it is a topic that can inspire and branch out into multiple adjacent and follow-up explorations and use cases.
- Minor observation: in your conclusion you mention "...uses diffusion models to produce latent queries in retrieval-optimized spaces." While not necessarily incorrect, I would frame this as "retrieval-relevant" or "retrieval-friendly" spaces rather than "optimized". "Optimized" sounds a bit strong; at the very least, (a) I expected to see something related to the optimization of the retrieval space itself, and/or (b) it left me wanting a justification of why these spaces are already optimal in a retrieval setting.
- Minor omission: "Figure 2: GD Retriever Method: We train a model to generate text-conditioned ghost queries for retrieval. Left: A diffusion model is trained to generate audio [LATENTS] from text captions. Right: Using the frozen model, we generate audio embeddings from a caption to retrieve similar audio via ghost queries." The term "latents" (or something similar, perhaps "embeddings") is missing; otherwise the caption reads as if the actual audio output is being generated.
- My reason for weak accept rather than strong accept is mainly the Controllability section, on which the paper's overall claim is centered. While the authors did carry out experiments with negative prompting and DDIM inversion and provided some results and metrics, a strong accept would have required more in-depth experiments and clearer evidence of controllable retrieval behavior under different settings, including examples and potentially demos. Within the current scope of the experiments, controllability in retrieval seems promising but not conclusive enough to declare it a robust or preferred approach for controllable retrieval compared to other methods.