Playing MP3 audio using the Java API

This article describes an approach to playing MP3 audio with the Java API.

Problem description


    Can you please suggest how I can write a piece of code that plays a song?

    I tried the following snippet, but I get this exception:

    import sun.audio.*;
    import java.io.*;

    class tester {
        public static void main(String args[]) throws Exception {
            InputStream in = new FileInputStream("tester.mp3");
            AudioStream as = new AudioStream(in);
            AudioPlayer.player.start(as);
        }
    }

    Solution

    As mentioned, Java Sound does not support MP3 by default. For the types it does support in any specific JRE, check AudioSystem.getAudioFileTypes().
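
    For example, a minimal sketch (the class name is illustrative) that prints the audio file types this JRE reports:

    // Sketch: list the audio file types reported by the current JRE.
    import javax.sound.sampled.AudioFileFormat;
    import javax.sound.sampled.AudioSystem;

    class ListSupportedAudioTypes {
        public static void main(String[] args) {
            for (AudioFileFormat.Type type : AudioSystem.getAudioFileTypes()) {
                System.out.println(type); // typically WAVE, AU and AIFF on a stock JRE
            }
        }
    }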

    One way to add support for reading MP3 is to add the JMF-based mp3plugin.jar to the application's run-time classpath.
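
    Once an MP3 service provider such as mp3plugin.jar is on the classpath, the standard AudioSystem calls can open and decode the file. A rough sketch (the file name comes from the question; the class name and exact target format are assumptions):

    // Sketch: with an MP3 SPI (e.g. the JMF mp3plugin.jar) on the classpath,
    // AudioSystem can open the MP3 and hand back a signed PCM stream for playback.
    import java.io.File;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;

    class Mp3Decode {
        public static void main(String[] args) throws Exception {
            AudioInputStream mp3 = AudioSystem.getAudioInputStream(new File("tester.mp3"));
            AudioFormat base = mp3.getFormat();
            // Request a signed PCM rendering of the same stream; the SPI does the decoding.
            AudioFormat pcm = new AudioFormat(
                AudioFormat.Encoding.PCM_SIGNED,
                base.getSampleRate(), 16,
                base.getChannels(), base.getChannels() * 2,
                base.getSampleRate(), false);
            AudioInputStream decoded = AudioSystem.getAudioInputStream(pcm, mp3);
            System.out.println("Decoded format: " + decoded.getFormat());
        }
    }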

  • That link is known to be less than entirely reliable. The Jar is also available at my software share drive.
  • As to your actual question, while a javax.sound.sampled.Clip might seem ideal for this kind of task, it unfortunately will only hold a second of stereo, 16-bit, 44.1 kHz sound. That is why I developed BigClip. (It has its own problems with looping; if you fix them, report back.)

    package org.pscode.xui.sound.bigclip;

    import java.awt.Component;
    import javax.swing.*;

    // J2SE 1.3
    import javax.sound.sampled.*;
    import java.io.*;

    // J2SE 1.4
    import java.util.logging.*;
    import java.util.Arrays;

    /** An implementation of the javax.sound.sampled.Clip that is designed
    to handle Clips of arbitrary size, limited only by the amount of memory
    available to the app. It uses the post 1.4 thread behaviour (daemon thread)
    that will stop the sound running after the main has exited.
    <ul>
    <li>2012-07-24 - Fixed bug in size of byte array (2^16 -> (int)Math.pow(2, 16)).
    <li>2009-09-01 - Fixed bug that had clip ..clipped at the end, by calling drain()
    (before calling stop()) on the dataline after the play loop was complete.
    Improvement to frame and microsecond position determination.
    <li>2009-08-17 - added convenience constructor that accepts a Clip. Changed the
    private convertFrameToM..seconds methods from 'micro' to 'milli' to reflect
    that they were dealing with units of 1000/th of a second.
    <li>2009-08-14 - got rid of flush() after the sound loop, as it was cutting off
    tracks just before the end, and was found to be not needed for the
    fast-forward/rewind functionality it was introduced to support.
    <li>2009-08-11 - First binary release.
    </ul>
    N.B. Remove @Override notation and logging to use in 1.3+
    @since 1.5
    @version 2009-08-17
    @author Andrew Thompson */
    public class BigClip implements Clip, LineListener {

        /** The DataLine used by this Clip. */
        private SourceDataLine dataLine;
        /** The raw bytes of the audio data. */
        private byte[] audioData;
        /** The stream wrapper for the audioData. */
        private ByteArrayInputStream inputStream;
        /** Loop count set by the calling code. */
        private int loopCount;
        /** Internal count of how many loops to go. */
        private int countDown;
        /** The start of a loop point. Defaults to 0. */
        private int loopPointStart;
        /** The end of a loop point. Defaults to the end of the Clip. */
        private int loopPointEnd;
        /** Stores the current frame position of the clip. */
        private int framePosition;
        /** Thread used to run() sound. */
        private Thread thread;
        /** Whether the sound is currently playing or active. */
        private boolean active;
        /** Stores the last time bytes were dumped to the audio stream. */
        private long timelastPositionSet;
        private int bufferUpdateFactor = 2;
        /** The parent Component for the loading progress dialog. */
        Component parent = null;
        /** Used for reporting messages. */
        private Logger logger = Logger.getAnonymousLogger();

        /** Default constructor for a BigClip. Does nothing. Information from the
        AudioInputStream passed in open() will be used to get an appropriate
        SourceDataLine. */
        public BigClip() {}

        /** There are a number of AudioSystem methods that will return a configured
        Clip. This convenience constructor allows us to obtain a SourceDataLine
        for the BigClip that uses the same AudioFormat as the original Clip.
        @param clip Clip The Clip used to configure the BigClip. */
        public BigClip(Clip clip) throws LineUnavailableException {
            dataLine = AudioSystem.getSourceDataLine(clip.getFormat());
        }

        /** Provides the entire audio buffer of this clip.
        @return audioData byte[] The bytes of the audio data that is loaded in
        this Clip. */
        public byte[] getAudioData() { return audioData; }

        /** Sets a parent component to act as owner of a "Loading track.."
        progress dialog. If null, there will be no progress shown. */
        public void setParentComponent(Component parent) { this.parent = parent; }

        /** Converts a frame count to a duration in milliseconds. */
        private long convertFramesToMilliseconds(int frames) {
            return (frames / (long)dataLine.getFormat().getSampleRate()) * 1000;
        }

        /** Converts a duration in milliseconds to a frame count. */
        private int convertMillisecondsToFrames(long milliseconds) {
            return (int)(milliseconds / dataLine.getFormat().getSampleRate());
        }

        @Override
        public void update(LineEvent le) {
            logger.log(Level.FINEST, "update: " + le);
        }

        @Override
        public void loop(int count) {
            logger.log(Level.FINEST, "loop(" + count + ") - framePosition: " + framePosition);
            loopCount = count;
            countDown = count;
            active = true;
            inputStream.reset();
            start();
        }

        @Override
        public void setLoopPoints(int start, int end) {
            if (start < 0
                || start > audioData.length - 1
                || end < 0
                || end > audioData.length) {
                throw new IllegalArgumentException(
                    "Loop points '" + start + "' and '" + end
                    + "' cannot be set for buffer of size " + audioData.length);
            }
            if (start > end) {
                throw new IllegalArgumentException(
                    "End position " + end + " precedes start position " + start);
            }
            loopPointStart = start;
            framePosition = loopPointStart;
            loopPointEnd = end;
        }

        @Override
        public void setMicrosecondPosition(long milliseconds) {
            framePosition = convertMillisecondsToFrames(milliseconds);
        }

        @Override
        public long getMicrosecondPosition() {
            return convertFramesToMilliseconds(getFramePosition());
        }

        @Override
        public long getMicrosecondLength() {
            return convertFramesToMilliseconds(getFrameLength());
        }

        @Override
        public void setFramePosition(int frames) {
            framePosition = frames;
            int offset = framePosition * format.getFrameSize();
            try {
                inputStream.reset();
                inputStream.read(new byte[offset]);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        @Override
        public int getFramePosition() {
            long timeSinceLastPositionSet = System.currentTimeMillis() - timelastPositionSet;
            int size = dataLine.getBufferSize() * (format.getChannels() / 2) / bufferUpdateFactor;
            int framesSinceLast = (int)((timeSinceLastPositionSet / 1000f)
                * dataLine.getFormat().getFrameRate());
            int framesRemainingTillTime = size - framesSinceLast;
            return framePosition - framesRemainingTillTime;
        }

        @Override
        public int getFrameLength() {
            return audioData.length / format.getFrameSize();
        }

        AudioFormat format;

        @Override
        public void open(AudioInputStream stream) throws
            IOException,
            LineUnavailableException {
            AudioInputStream is1;
            format = stream.getFormat();
            if (format.getEncoding() != AudioFormat.Encoding.PCM_SIGNED) {
                is1 = AudioSystem.getAudioInputStream(
                    AudioFormat.Encoding.PCM_SIGNED, stream);
            } else {
                is1 = stream;
            }
            format = is1.getFormat();
            InputStream is2;
            if (parent != null) {
                ProgressMonitorInputStream pmis = new ProgressMonitorInputStream(
                    parent,
                    "Loading track..",
                    is1);
                pmis.getProgressMonitor().setMillisToPopup(0);
                is2 = pmis;
            } else {
                is2 = is1;
            }
            byte[] buf = new byte[(int)Math.pow(2, 16)];
            int totalRead = 0;
            int numRead = 0;
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            numRead = is2.read(buf);
            while (numRead > -1) {
                baos.write(buf, 0, numRead);
                numRead = is2.read(buf, 0, buf.length);
                totalRead += numRead;
            }
            is2.close();
            audioData = baos.toByteArray();
            AudioFormat afTemp;
            if (format.getChannels() < 2) {
                afTemp = new AudioFormat(
                    format.getEncoding(),
                    format.getSampleRate(),
                    format.getSampleSizeInBits(),
                    2,
                    format.getSampleSizeInBits() * 2 / 8, // calculate frame size
                    format.getFrameRate(),
                    format.isBigEndian());
            } else {
                afTemp = format;
            }
            setLoopPoints(0, audioData.length);
            dataLine = AudioSystem.getSourceDataLine(afTemp);
            dataLine.open();
            inputStream = new ByteArrayInputStream(audioData);
        }

        @Override
        public void open(AudioFormat format,
            byte[] data,
            int offset,
            int bufferSize)
            throws LineUnavailableException {
            byte[] input = new byte[bufferSize];
            for (int ii = 0; ii < input.length; ii++) {
                input[ii] = data[offset + ii];
            }
            ByteArrayInputStream inputStream = new ByteArrayInputStream(input);
            try {
                AudioInputStream ais1 = AudioSystem.getAudioInputStream(inputStream);
                AudioInputStream ais2 = AudioSystem.getAudioInputStream(format, ais1);
                open(ais2);
            } catch (UnsupportedAudioFileException uafe) {
                throw new IllegalArgumentException(uafe);
            } catch (IOException ioe) {
                throw new IllegalArgumentException(ioe);
            }
            // TODO - throw IAE for invalid frame size, format.
        }

        @Override
        public float getLevel() { return dataLine.getLevel(); }

        @Override
        public long getLongFramePosition() {
            return dataLine.getLongFramePosition() * 2 / format.getChannels();
        }

        @Override
        public int available() { return dataLine.available(); }

        @Override
        public int getBufferSize() { return dataLine.getBufferSize(); }

        @Override
        public AudioFormat getFormat() { return format; }

        @Override
        public boolean isActive() { return dataLine.isActive(); }

        @Override
        public boolean isRunning() { return dataLine.isRunning(); }

        @Override
        public boolean isOpen() { return dataLine.isOpen(); }

        @Override
        public void stop() {
            logger.log(Level.FINEST, "BigClip.stop()");
            active = false;
            // why did I have this commented out?
            dataLine.stop();
            if (thread != null) {
                try {
                    active = false;
                    thread.join();
                } catch (InterruptedException wakeAndContinue) {
                }
            }
        }

        public byte[] convertMonoToStereo(byte[] data, int bytesRead) {
            byte[] tempData = new byte[bytesRead * 2];
            if (format.getSampleSizeInBits() == 8) {
                for (int ii = 0; ii < bytesRead; ii++) {
                    byte b = data[ii];
                    tempData[ii * 2] = b;
                    tempData[ii * 2 + 1] = b;
                }
            } else {
                for (int ii = 0; ii < bytesRead - 1; ii += 2) {
                    //byte b2 = is2.read();
                    byte b1 = data[ii];
                    byte b2 = data[ii + 1];
                    tempData[ii * 2] = b1;
                    tempData[ii * 2 + 1] = b2;
                    tempData[ii * 2 + 2] = b1;
                    tempData[ii * 2 + 3] = b2;
                }
            }
            return tempData;
        }

        boolean fastForward;
        boolean fastRewind;

        public void setFastForward(boolean fastForward) {
            logger.log(Level.FINEST, "FastForward " + fastForward);
            this.fastForward = fastForward;
            fastRewind = false;
            flush();
        }

        public boolean getFastForward() { return fastForward; }

        public void setFastRewind(boolean fastRewind) {
            logger.log(Level.FINEST, "FastRewind " + fastRewind);
            this.fastRewind = fastRewind;
            fastForward = false;
            flush();
        }

        public boolean getFastRewind() { return fastRewind; }

        /** TODO - fix bug in LOOP_CONTINUOUSLY */
        @Override
        public void start() {
            Runnable r = new Runnable() {
                public void run() {
                    try {
                        /* Should these open()/close() calls be here, or explicitly
                        called by user program? The JavaDocs for line suggest that
                        Clip should throw an IllegalArgumentException, so we'll
                        stick with that and call it explicitly. */
                        dataLine.open();
                        dataLine.start();
                        int bytesRead = 0;
                        int frameSize = dataLine.getFormat().getFrameSize();
                        int bufSize = dataLine.getBufferSize();
                        boolean startOrMove = true;
                        byte[] data = new byte[bufSize];
                        int offset = framePosition * frameSize;
                        int totalBytes = offset;
                        inputStream.read(new byte[offset], 0, offset);
                        logger.log(Level.FINEST, "loopCount " + loopCount);
                        while ((bytesRead = inputStream.read(data, 0, data.length)) != -1
                            && (loopCount == Clip.LOOP_CONTINUOUSLY
                            || countDown > 0)
                            && active) {
                            logger.log(Level.FINEST,
                                "BigClip.start() loop " + framePosition);
                            totalBytes += bytesRead;
                            int framesRead;
                            byte[] tempData;
                            if (format.getChannels() < 2) {
                                tempData = convertMonoToStereo(data, bytesRead);
                                framesRead = bytesRead /
                                    format.getFrameSize();
                                bytesRead *= 2;
                            } else {
                                framesRead = bytesRead /
                                    dataLine.getFormat().getFrameSize();
                                tempData = Arrays.copyOfRange(data, 0, bytesRead);
                            }
                            framePosition += framesRead;
                            if (framePosition >= loopPointEnd) {
                                framePosition = loopPointStart;
                                inputStream.reset();
                                countDown--;
                                logger.log(Level.FINEST,
                                    "Loop Count: " + countDown);
                            }
                            timelastPositionSet = System.currentTimeMillis();
                            byte[] newData;
                            if (fastForward) {
                                newData = getEveryNthFrame(tempData, 2);
                            } else if (fastRewind) {
                                byte[] temp = getEveryNthFrame(tempData, 2);
                                newData = reverseFrames(temp);
                                inputStream.reset();
                                totalBytes -= 2 * bytesRead;
                                framePosition -= 2 * framesRead;
                                if (totalBytes < 0) {
                                    setFastRewind(false);
                                    totalBytes = 0;
                                }
                                inputStream.skip(totalBytes);
                                logger.log(Level.INFO, "totalBytes " + totalBytes);
                            } else {
                                newData = tempData;
                            }
                            dataLine.write(newData, 0, newData.length);
                            if (startOrMove) {
                                data = new byte[bufSize /
                                    bufferUpdateFactor];
                                startOrMove = false;
                            }
                        }
                        logger.log(Level.FINEST,
                            "BigClip.start() loop ENDED" + framePosition);
                        active = false;
                        dataLine.drain();
                        dataLine.stop();
                        /* should these open()/close() be here, or explicitly
                        called by user program? */
                        dataLine.close();
                    } catch (LineUnavailableException lue) {
                        logger.log(Level.SEVERE,
                            "No sound line available!", lue);
                        if (parent != null) {
                            JOptionPane.showMessageDialog(
                                parent,
                                "Clear the sound lines to proceed",
                                "No audio lines available!",
                                JOptionPane.ERROR_MESSAGE);
                        }
                    }
                }
            };
            thread = new Thread(r);
            // makes thread behaviour compatible with JavaSound post 1.4
            thread.setDaemon(true);
            thread.start();
        }

        /** Assume the frame size is 4. */
        public byte[] reverseFrames(byte[] data) {
            byte[] reversed = new byte[data.length];
            byte[] frame = new byte[4];
            for (int ii = 0; ii < data.length / 4; ii++) {
                int first = (data.length) - ((ii + 1) * 4) + 0;
                int last = (data.length) - ((ii + 1) * 4) + 3;
                frame[0] = data[first];
                frame[1] = data[(data.length) - ((ii + 1) * 4) + 1];
                frame[2] = data[(data.length) - ((ii + 1) * 4) + 2];
                frame[3] = data[last];
                reversed[ii * 4 + 0] = frame[0];
                reversed[ii * 4 + 1] = frame[1];
                reversed[ii * 4 + 2] = frame[2];
                reversed[ii * 4 + 3] = frame[3];
                if (ii < 5 || ii > (data.length / 4) - 5) {
                    logger.log(Level.FINER, "From \t" + first + " \tlast " + last);
                    logger.log(Level.FINER, "To \t" + ((ii * 4) + 0) + " \tlast " + ((ii * 4) + 3));
                }
            }
            /*
            for (int ii=0; ii<data.length; ii++) {
                reversed[ii] = data[data.length-1-ii];
            }
            */
            return reversed;
        }

        /** Assume the frame size is 4. */
        public byte[] getEveryNthFrame(byte[] data, int skip) {
            int length = data.length / skip;
            length = (length / 4) * 4;
            logger.log(Level.FINEST, "length " + data.length + " \t" + length);
            byte[] b = new byte[length];
            //byte[] frame = new byte[4];
            for (int ii = 0; ii < b.length / 4; ii++) {
                b[ii * 4 + 0] = data[ii * skip * 4 + 0];
                b[ii * 4 + 1] = data[ii * skip * 4 + 1];
                b[ii * 4 + 2] = data[ii * skip * 4 + 2];
                b[ii * 4 + 3] = data[ii * skip * 4 + 3];
            }
            return b;
        }

        @Override
        public void flush() { dataLine.flush(); }

        @Override
        public void drain() { dataLine.drain(); }

        @Override
        public void removeLineListener(LineListener listener) {
            dataLine.removeLineListener(listener);
        }

        @Override
        public void addLineListener(LineListener listener) {
            dataLine.addLineListener(listener);
        }

        @Override
        public Control getControl(Control.Type control) {
            return dataLine.getControl(control);
        }

        @Override
        public Control[] getControls() {
            if (dataLine == null) {
                return new Control[0];
            } else {
                return dataLine.getControls();
            }
        }

        @Override
        public boolean isControlSupported(Control.Type control) {
            return dataLine.isControlSupported(control);
        }

        @Override
        public void close() { dataLine.close(); }

        @Override
        public void open() throws LineUnavailableException {
            throw new IllegalArgumentException("illegal call to open() in interface Clip");
        }

        @Override
        public Line.Info getLineInfo() { return dataLine.getLineInfo(); }

        /** Determines the single largest sample size of all channels of the current
        clip. This can be handy for determining a fraction to scale visual
        representations.
        @return Double between 0 & 1 representing the maximum signal level of any
        channel. */
        public double getLargestSampleSize() {
            int largest = 0;
            int current;
            boolean signed = (format.getEncoding() == AudioFormat.Encoding.PCM_SIGNED);
            int bitDepth = format.getSampleSizeInBits();
            boolean bigEndian = format.isBigEndian();
            int samples = audioData.length * 8 / bitDepth;
            if (signed) {
                if (bitDepth / 8 == 2) {
                    if (bigEndian) {
                        for (int cc = 0; cc < samples; cc++) {
                            current = (audioData[cc * 2] * 256 + (audioData[cc * 2 + 1] & 0xFF));
                            if (Math.abs(current) > largest) {
                                largest = Math.abs(current);
                            }
                        }
                    } else {
                        for (int cc = 0; cc < samples; cc++) {
                            current = (audioData[cc * 2 + 1] * 256 + (audioData[cc * 2] & 0xFF));
                            if (Math.abs(current) > largest) {
                                largest = Math.abs(current);
                            }
                        }
                    }
                } else {
                    for (int cc = 0; cc < samples; cc++) {
                        current = (audioData[cc] & 0xFF);
                        if (Math.abs(current) > largest) {
                            largest = Math.abs(current);
                        }
                    }
                }
            } else {
                if (bitDepth / 8 == 2) {
                    if (bigEndian) {
                        for (int cc = 0; cc < samples; cc++) {
                            current = (audioData[cc * 2] * 256 + (audioData[cc * 2 + 1] - 0x80));
                            if (Math.abs(current) > largest) {
                                largest = Math.abs(current);
                            }
                        }
                    } else {
                        for (int cc = 0; cc < samples; cc++) {
                            current = (audioData[cc * 2 + 1] * 256 + (audioData[cc * 2] - 0x80));
                            if (Math.abs(current) > largest) {
                                largest = Math.abs(current);
                            }
                        }
                    }
                } else {
                    for (int cc = 0; cc < samples; cc++) {
                        if (audioData[cc] > 0) {
                            current = (audioData[cc] - 0x80);
                            if (Math.abs(current) > largest) {
                                largest = Math.abs(current);
                            }
                        } else {
                            current = (audioData[cc] + 0x80);
                            if (Math.abs(current) > largest) {
                                largest = Math.abs(current);
                            }
                        }
                    }
                }
            }
            // audioData
            logger.log(Level.FINEST, "Max signal level: "
                + (double)largest / (Math.pow(2, bitDepth - 1)));
            return (double)largest / (Math.pow(2, bitDepth - 1));
        }
    }
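
    A usage sketch under the same assumptions (an MP3 SPI such as mp3plugin.jar on the runtime classpath, a local tester.mp3, and a made-up demo class name), showing BigClip standing in for a regular Clip:

    // Sketch: load an MP3 into BigClip and let it play through once.
    import java.io.File;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;
    import org.pscode.xui.sound.bigclip.BigClip;

    class BigClipDemo {
        public static void main(String[] args) throws Exception {
            AudioInputStream ais = AudioSystem.getAudioInputStream(new File("tester.mp3"));
            BigClip clip = new BigClip();
            clip.open(ais); // decodes and buffers the entire track in memory
            clip.loop(1);   // in this class, loop(n) is what arms playback and calls start()
            // Playback runs on a daemon thread, so keep main alive while it plays. Note that
            // getMicrosecondLength() above delegates to convertFramesToMilliseconds(), so the
            // value is effectively milliseconds; add a margin so the tail is not cut off.
            Thread.sleep(clip.getMicrosecondLength() + 1000);
        }
    }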
