Background: a file containing the company server's IP address, account, and password was accidentally committed to git.
Fix:
git reset --hard <commit_id>
git push origin HEAD --force
Notes:
Depending on --soft / --mixed / --hard, git reset resets the working tree, the index, and HEAD differently:
git reset --mixed: the default (a bare git reset). It rolls back to the given commit, keeping only the working tree (source code) while resetting the commit history and the index.
git reset --soft: rolls back only the commit history; the index is left untouched. If you want to commit again, just run git commit.
git reset --hard: rolls back completely; the local source tree is also reset to that commit's content.
HEAD: the most recent commit
HEAD^: the commit before that
<commit_id>: the SHA-1 of a commit. You can find it with git log, or on the Commits page of the repository web UI.
Squashing commits:
http://www.douban.com/note/318248317/
posted @ 2018-01-09 10:51 lfc
Based on the following post:
http://blog.csdn.net/vblittleboy/article/details/20121341
with a few changes to fix bugs and make it more generally usable:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <libavutil/opt.h>
#include <libavutil/mathematics.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
FILE * fp_in = NULL;
FILE * fp_out = NULL;
static int frame_count;
int main(int argc, char **argv)
{
int ret;
AVCodec *audio_codec;
AVCodecContext *c;
AVFrame *frame;
AVPacket pkt = { 0 }; // data and size must be 0;
int got_output;
/* Initialize libavcodec, and register all codecs and formats. */
av_register_all();
avcodec_register_all();
//avdevice_register_all();
audio_codec = avcodec_find_encoder(AV_CODEC_ID_AAC);
c = avcodec_alloc_context3(audio_codec);
// c->strict_std_compliance =FF_COMPLIANCE_EXPERIMENTAL;
c->codec_id = AV_CODEC_ID_AAC;
c->sample_fmt = AV_SAMPLE_FMT_S16; /* S16 input requires the libfdk_aac encoder (see the note below); the native AAC encoder expects FLTP */
c->sample_rate = 44100;
c->channels = 2;
c->channel_layout = AV_CH_LAYOUT_STEREO;
c->bit_rate = 64000;
/* open the codec */
ret = avcodec_open2(c, audio_codec, NULL);
if (ret < 0) {
fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
exit(1);
}
/* allocate and init a re-usable frame */
#if 0
frame = avcodec_alloc_frame();
#else
frame = av_frame_alloc();
#endif
if (!frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(1);
}
frame->nb_samples = c->frame_size;
frame->format = c->sample_fmt;
frame->channels = c->channels;
frame->channel_layout = c->channel_layout;
#if 0
frame->linesize[0] = 4096;
frame->extended_data = frame->data[0] = av_malloc((size_t)frame->linesize[0]);
#else
ret = av_frame_get_buffer(frame, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate an audio frame.\n");
exit(1);
}
printf("----nb_samples= %d, linesize= %d\n", frame->nb_samples, frame->linesize[0]);
#endif
av_init_packet(&pkt);
fp_in = fopen("in.wav","rb");
fp_out= fopen("out.aac","wb");
//printf("frame->nb_samples = %d\n",frame->nb_samples);
while(1)
{
frame_count++;
memset(frame->data[0], 0, frame->linesize[0]); /* clear the frame buffer before reading */
ret = fread(frame->data[0],frame->linesize[0],1,fp_in);
if(ret <= 0)
{
printf("read over !\n");
break;
}
ret = avcodec_encode_audio2(c, &pkt, frame, &got_output);
if (ret < 0) {
fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
exit(1);
}
if(got_output > 0)
{
//printf("pkt.size = %d\n",pkt.size);
fwrite(pkt.data,pkt.size,1,fp_out);
av_free_packet(&pkt);
}
#if 0
if(frame_count > 10)
{
printf("break @@@@@@@@@@@@\n");
break;
}
#endif
}
avcodec_close(c);
av_free(c);
#if 0
avcodec_free_frame(&frame);
#else
av_frame_free(&frame);
#endif
fclose(fp_in);
fclose(fp_out);
return 0;
}
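(A side note, not from the original post: avcodec_encode_audio2() and av_free_packet() used above are the legacy APIs. On FFmpeg 3.1 and later the same encode step can be written with the send/receive pair. The sketch below is written in C++ with the FFmpeg headers wrapped in extern "C", and assumes c, frame and fp_out are set up exactly as in the program above.)

extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdio>

// Sketch: feed one frame to the encoder and drain every packet it produces.
// Pass frame == NULL to flush the encoder at end of input.
static int encode_and_write(AVCodecContext *c, AVFrame *frame, FILE *fp_out)
{
    int ret = avcodec_send_frame(c, frame);
    if (ret < 0)
        return ret;

    AVPacket *pkt = av_packet_alloc();
    while ((ret = avcodec_receive_packet(c, pkt)) >= 0) {
        fwrite(pkt->data, 1, pkt->size, fp_out); // raw AAC; add ADTS headers if a player needs them
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);

    // EAGAIN (needs more input) and EOF (fully flushed) are the normal ways out of the loop:
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}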
Also, the examples that ship with FFmpeg are well worth reading (especially if you need resampling):
doc/examples/transcode_aac.c
Note:
The AAC encoding here uses the libfdk_aac library; for details see:
http://trac.ffmpeg.org/wiki/Encode/AAC
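(Another hedged aside: if FFmpeg is built without libfdk_aac and falls back to the native AAC encoder, that encoder only accepts AV_SAMPLE_FMT_FLTP, so the interleaved S16 samples read above must be converted first. A minimal libswresample sketch of that conversion follows; error handling is omitted and the helper name is mine, not FFmpeg's.)

extern "C" {
#include <libswresample/swresample.h>
#include <libavutil/channel_layout.h>
#include <libavutil/samplefmt.h>
}

// Sketch: build a converter from interleaved S16 stereo to planar float (FLTP).
static SwrContext *make_s16_to_fltp(int sample_rate)
{
    SwrContext *swr = swr_alloc_set_opts(NULL,
            AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_FLTP, sample_rate,  // output layout/format/rate
            AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16,  sample_rate,  // input  layout/format/rate
            0, NULL);
    swr_init(swr);
    return swr;
}

// Per frame, with in_frame holding the S16 data read from the file and out_frame an
// FLTP frame allocated with av_frame_get_buffer() as in the program above:
//   swr_convert(swr, out_frame->data, out_frame->nb_samples,
//               (const uint8_t **)in_frame->data, in_frame->nb_samples);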
posted @ 2017-02-10 11:24 lfc
Abstract: With the basics covered, how does live555's RTSP server get created and started, and how does it get hooked up to the Source and Sink? The main program creates the RTSP server with code along the following lines: // C...
Read the full post
posted @ 2017-02-08 16:47 lfc
The following is just a personal summary, kept for my own records. For a systematic analysis of live555, read the post below, or read the source:
http://blog.csdn.net/niu_gao/article/details/6906055
I. Concepts
live555, like GStreamer and DirectShow, is organised around the concepts of Source, Filter and Sink. For example, in the test program testOnDemandRTSPServer, the pipeline for streaming H264 is the following (built automatically by H264VideoFileServerMediaSubsession):
[Source] ByteStreamFileSource->H264or5VideoStreamParser(MPEGVideoStreamParser)->H264VideoStreamFramer(H264or5VideoStreamFramer(MPEGVideoStreamFramer))
The class that deals with the Sink directly is H264VideoStreamFramer (returned by H264VideoFileServerMediaSubsession's createNewStreamSource); the Parser and FileSource are built by H264VideoStreamFramer itself, on demand.
[Sink] H264or5Fragmenter->H264VideoRTPSink(H264or5VideoRTPSink)->VideoRTPSink->MultiFramedRTPSink
H264or5Fragmenter talks to the H264or5VideoStreamFramer above: it fetches frame data from the Source and splits it into RTP packets for output.
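(For orientation, this is roughly how testOnDemandRTSPServer asks for that pipeline: the stream and file names below are placeholders, but the classes and calls are the standard live555 ones, and the whole Source/Filter chain above is then built on demand inside the subsession.)

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

// Sketch: register an H.264 elementary-stream file with an existing RTSPServer.
void addH264Session(RTSPServer* rtspServer, UsageEnvironment& env) {
    ServerMediaSession* sms = ServerMediaSession::createNew(env, "h264ESVideoTest",
                                                            "test.264", "H.264 elementary stream");
    // The subsession creates ByteStreamFileSource -> ... -> H264VideoStreamFramer in
    // createNewStreamSource(), and the RTP sink side in createNewRTPSink():
    sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(env, "test.264", False /*reuseFirstSource*/));
    rtspServer->addServerMediaSession(sms);
}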
II. Data flow
1. The data input first
1) First, the Sink side allocates a buffer for the data fetched from the Source. The pointer to that buffer is the familiar fTo, and the Sink hands the pointer down through a chain of classes, namely:
H264or5Fragmenter->MPEGVideoStreamFramer->MPEGVideoStreamParser
so it travels from the Sink all the way into the Parser. The relevant code fragments:
H264or5Fragmenter::H264or5Fragmenter
{
fInputBuffer = new unsigned char[fInputBufferSize];
}
void H264or5Fragmenter::doGetNextFrame()
{
fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1,
afterGettingFrame, this,
FramedSource::handleClosure, this);
}
MPEGVideoStreamFramer::doGetNextFrame()
{
fParser->registerReadInterest(fTo, fMaxSize);
continueReadProcessing();
}
void MPEGVideoStreamParser::registerReadInterest(unsigned char* to,
unsigned maxSize) {
fStartOfFrame = fTo = fSavedTo = to;
fLimit = to + maxSize;
fNumTruncatedBytes = fSavedNumTruncatedBytes = 0;
}
2) You may have noticed that fTo never reaches the ultimate Source (ByteStreamFileSource). That is because ByteStreamFileSource is accessed by the Parser: the Parser keeps its own buffer for the data it reads from ByteStreamFileSource, and then writes the NAL units it extracts into fTo (the buffer that came from the Sink). That is why fTo stops at the Parser and never reaches ByteStreamFileSource.
The relevant code:
StreamParser::StreamParser
{
fBank[0] = new unsigned char[BANK_SIZE];
fBank[1] = new unsigned char[BANK_SIZE];
}
StreamParser::ensureValidBytes1
{
unsigned maxNumBytesToRead = BANK_SIZE - fTotNumValidBytes;
fInputSource->getNextFrame(&curBank()[fTotNumValidBytes],
maxNumBytesToRead,
afterGettingBytes, this,
onInputClosure, this);
}
unsigned H264or5VideoStreamParser::parse()
{
  saveXBytes(Y); // abbreviated: the parser copies the parsed bytes into fTo via saveByte()/save4Bytes() etc.
}
class MPEGVideoStreamParser: public StreamParser
{
// Record "byte" in the current output frame:
void saveByte(u_int8_t byte) {
if (fTo >= fLimit) { // there's no space left
++fNumTruncatedBytes;
return;
}
*fTo++ = byte;
}
}
2. Now the data output
1) The input data described above ends up in H264or5Fragmenter. A note on this class:
H264or5Fragmenter is itself a FramedSource (defined inside H264or5VideoRTPSink.cpp) and connects H264VideoRTPSink with H264VideoStreamFramer. Its doGetNextFrame implementation takes the data obtained from the Source, fragments it as required by the RTP payload format, and stores the result in the Sink's fOutBuf.
The code:
MultiFramedRTPSink::MultiFramedRTPSink(UsageEnvironment& env,
Groupsock* rtpGS,
unsigned char rtpPayloadType,
unsigned rtpTimestampFrequency,
char const* rtpPayloadFormatName,
unsigned numChannels)
: RTPSink(env, rtpGS, rtpPayloadType, rtpTimestampFrequency,
rtpPayloadFormatName, numChannels),
fOutBuf(NULL), fCurFragmentationOffset(0), fPreviousFrameEndedFragmentation(False),
fOnSendErrorFunc(NULL), fOnSendErrorData(NULL) {
setPacketSizes((RTP_PAYLOAD_PREFERRED_SIZE), (RTP_PAYLOAD_MAX_SIZE)); //sihid
}
void MultiFramedRTPSink::setPacketSizes(unsigned preferredPacketSize,
unsigned maxPacketSize) {
if (preferredPacketSize > maxPacketSize || preferredPacketSize == 0) return;
// sanity check
delete fOutBuf;
fOutBuf = new OutPacketBuffer(preferredPacketSize, maxPacketSize);
fOurMaxPacketSize = maxPacketSize; // save value, in case subclasses need it
}
Boolean MultiFramedRTPSink::continuePlaying() {
// Send the first packet.
// (This will also schedule any future sends.)
buildAndSendPacket(True);
return True;
}
void MultiFramedRTPSink::buildAndSendPacket(Boolean isFirstPacket) {
..
packFrame();
}
void MultiFramedRTPSink::packFrame() {
// Get the next frame.
..
// Normal case: we need to read a new frame from the source
if (fSource == NULL) return;
fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
afterGettingFrame, this, ourHandleClosure, this);
}
void H264or5Fragmenter::doGetNextFrame() {
if (fNumValidDataBytes == 1) {
// We have no NAL unit data currently in the buffer. Read a new one:
fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1, // the Sink calls the Source's getNextFrame to fetch data
afterGettingFrame, this,
FramedSource::handleClosure, this);
} else {
..
memmove(fTo, &fInputBuffer[1], fNumValidDataBytes - 1);
..
memmove(fTo, fInputBuffer, fMaxSize);
..
memmove(fTo, &fInputBuffer[fCurDataOffset-numExtraHeaderBytes], numBytesToSend);
..
}
As you can see, the data input is actually driven by the Sink (MultiFramedRTPSink): when the Sink needs data, it calls the Source's getNextFrame (implemented by the Source's doGetNextFrame), and after passing through the chain of classes (Source->Filter->Sink) the Sink ends up with the data it wants.
2) At this point the complete pipeline can finally be written out: ByteStreamFileSource->H264or5VideoStreamParser(MPEGVideoStreamParser)->H264VideoStreamFramer(H264or5VideoStreamFramer)->H264or5Fragmenter->H264VideoRTPSink(H264or5VideoRTPSink)->VideoRTPSink->MultiFramedRTPSink
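(To make the pull model concrete, here is a minimal sketch of what a source at the head of such a pipeline has to do. This is not code from live555 or from this post: the class and its fillOneFrame() helper are hypothetical, but the fTo / fMaxSize / afterGetting() contract is exactly the one FramedSource::getNextFrame() sets up, as shown below in part III.)

#include "FramedSource.hh"
#include <sys/time.h>

// Hypothetical live source: each doGetNextFrame() call copies one encoded frame
// into the buffer the downstream object handed us via getNextFrame().
class MyFrameSource: public FramedSource {
public:
    static MyFrameSource* createNew(UsageEnvironment& env) { return new MyFrameSource(env); }

protected:
    MyFrameSource(UsageEnvironment& env): FramedSource(env) {}

    virtual void doGetNextFrame() {
        // fTo and fMaxSize were stored by FramedSource::getNextFrame() before we got here.
        fFrameSize = fillOneFrame(fTo, fMaxSize);
        fNumTruncatedBytes = 0;
        gettimeofday(&fPresentationTime, NULL);
        // Hand control back to the downstream object (Framer/Fragmenter/Sink):
        FramedSource::afterGetting(this);
    }

private:
    unsigned fillOneFrame(unsigned char* to, unsigned maxSize) {
        // Hypothetical helper: copy one encoded frame (<= maxSize bytes) into 'to', return its size.
        (void)to; (void)maxSize;
        return 0;
    }
};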
III. Design notes
1. Buffers
There are several buffers in the pipeline above. For latency-sensitive applications it is worth being clear about how many there are and controlling their sizes (a tuning sketch follows this list):
1) StreamParser allocates buffers of size BANK_SIZE (150000), because its input is an unframed byte stream while its output must be a complete frame (a NAL unit), so the Parser needs working room;
2) H264or5Fragmenter allocates a buffer that receives the parsed data (NAL units) produced via the StreamParser, and from which it builds the RTP packets the downstream RTPSink needs;
3) MultiFramedRTPSink allocates a buffer (fOutBuf) into which the Fragmenter stores the fragmented RTP data, ready for the RTSP server to send.
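(As promised, a tuning sketch: BANK_SIZE is a compile-time constant in StreamParser.cpp, while the sink-side buffer is governed by the public static OutPacketBuffer::maxSize, which server programs normally raise before any RTPSink is created. The port number and the 300000 value below are only example assumptions.)

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

int main() {
    TaskScheduler* scheduler = BasicTaskScheduler::createNew();
    UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

    // Raise the sink-side packet buffer before any RTPSink exists, so large NAL units
    // (e.g. big I-frames) are not truncated by MultiFramedRTPSink:
    OutPacketBuffer::maxSize = 300000; // example value

    RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
    if (rtspServer == NULL) {
        *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
        return 1;
    }
    env->taskScheduler().doEventLoop(); // does not return
    return 0;
}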
2. About fTo
fTo is, as the name suggests, the pointer to the 'to' buffer, i.e. the buffer supplied by the downstream object. So when does fTo get assigned? Here:
void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
afterGettingFunc* afterGettingFunc,
void* afterGettingClientData,
onCloseFunc* onCloseFunc,
void* onCloseClientData) {
// Make sure we're not already being read:
if (fIsCurrentlyAwaitingData) {
envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
envir().internalError();
}
fTo = to; // the Sink's buffer is handed to the Source's fTo (sihid)
fMaxSize = maxSize; // set the FramedSource's max size (sihid)
fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
fAfterGettingFunc = afterGettingFunc;
fAfterGettingClientData = afterGettingClientData;
fOnCloseFunc = onCloseFunc;
fOnCloseClientData = onCloseClientData;
fIsCurrentlyAwaitingData = True;
doGetNextFrame();
}
In addition, MPEGVideoStreamFramer passes fTo on to MPEGVideoStreamParser through another route:
MPEGVideoStreamFramer::doGetNextFrame()
{
fParser->registerReadInterest(fTo, fMaxSize);
continueReadProcessing();
}
void MPEGVideoStreamParser::registerReadInterest(unsigned char* to,unsigned maxSize) {
fStartOfFrame = fTo = fSavedTo = to;
fLimit = to + maxSize;
fNumTruncatedBytes = fSavedNumTruncatedBytes = 0;
}
The reason is that MPEGVideoStreamParser (a StreamParser) is not a FramedSource, so it has to expose a separate API to receive fTo.
When the Sink needs data, it calls the Source's getNextFrame and passes its own buffer in through the 'to' parameter, which the Source stores in fTo.
3. About fSource
Boolean MediaSink::startPlaying(MediaSource& source,
afterPlayingFunc* afterFunc,
void* afterClientData) {
// Make sure we're not already being played:
if (fSource != NULL) {
envir().setResultMsg("This sink is already being played");
return False;
}
// Make sure our source is compatible:
if (!sourceIsCompatibleWithUs(source)) {
envir().setResultMsg("MediaSink::startPlaying(): source is not compatible!");
return False;
}
fSource = (FramedSource*)&source;
fAfterFunc = afterFunc;
fAfterClientData = afterClientData;
return continuePlaying();
}
The Sink's fSource is set in startPlaying to the H264VideoStreamFramer returned by createNewStreamSource.
It is then replaced by H264or5Fragmenter in continuePlaying (the H264or5VideoRTPSink implementation), while reassignInputSource records the H264VideoStreamFramer into H264or5Fragmenter's fInputSource member; as a result, H264or5Fragmenter sits between the Sink and H264VideoStreamFramer.
Boolean H264or5VideoRTPSink::continuePlaying() {
// First, check whether we have a 'fragmenter' class set up yet.
// If not, create it now:
if (fOurFragmenter == NULL) {
fOurFragmenter = new H264or5Fragmenter(fHNumber, envir(), fSource, OutPacketBuffer::maxSize, ourMaxPacketSize() - 12/*RTP hdr size*/);
} else {
fOurFragmenter->reassignInputSource(fSource);
}
fSource = fOurFragmenter;
// Then call the parent class's implementation:
return MultiFramedRTPSink::continuePlaying();
}
class FramedFilter: public FramedSource {
public:
FramedSource* inputSource() const { return fInputSource; }
void reassignInputSource(FramedSource* newInputSource) { fInputSource = newInputSource; }
// Call before destruction if you want to prevent the destructor from closing the input source
void detachInputSource();
protected:
FramedFilter(UsageEnvironment& env, FramedSource* inputSource);
// abstract base class
virtual ~FramedFilter();
protected:
// Redefined virtual functions (with default 'null' implementations):
virtual char const* MIMEtype() const;
virtual void getAttributes() const;
virtual void doStopGettingFrames();
protected:
FramedSource* fInputSource; // the upstream (input) Source
};
Why do it this way? Because the data the RTPSink needs has to be produced from H264VideoStreamFramer's output and packed into RTP packets by H264or5Fragmenter, hence the extra H264or5Fragmenter object (analogous to the H264or5VideoStreamParser earlier).
4. About fInputSource
fInputSource is defined in class FramedFilter, which both H264or5VideoStreamFramer and H264or5Fragmenter inherit from. However, fInputSource points to different things in these two classes.
1) In H264or5VideoStreamFramer it points to the ByteStreamFileSource; see the following code:
FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate) {
estBitrate = 500; // kbps, estimate
// Create the video source:
ByteStreamFileSource* fileSource = ByteStreamFileSource::createNew(envir(), fFileName);
if (fileSource == NULL) return NULL;
fFileSize = fileSource->fileSize();
// Create a framer for the Video Elementary Stream:
return H264VideoStreamFramer::createNew(envir(), fileSource); // the ByteStreamFileSource is recorded into H264VideoStreamFramer's fInputSource
}
H264or5VideoStreamFramer
::H264or5VideoStreamFramer(int hNumber, UsageEnvironment& env, FramedSource* inputSource,
Boolean createParser, Boolean includeStartCodeInOutput)
: MPEGVideoStreamFramer(env, inputSource),
fHNumber(hNumber),
fLastSeenVPS(NULL), fLastSeenVPSSize(0),
fLastSeenSPS(NULL), fLastSeenSPSSize(0),
fLastSeenPPS(NULL), fLastSeenPPSSize(0) {
..
}
MPEGVideoStreamFramer::MPEGVideoStreamFramer(UsageEnvironment& env,
FramedSource* inputSource)
: FramedFilter(env, inputSource),
fFrameRate(0.0) /* until we learn otherwise */,
fParser(NULL) {
reset();
}
2) In H264or5Fragmenter it points to the H264VideoStreamFramer; see the following code:
Boolean H264or5VideoRTPSink::continuePlaying() {
// First, check whether we have a 'fragmenter' class set up yet.
// If not, create it now:
if (fOurFragmenter == NULL) {
fOurFragmenter = new H264or5Fragmenter(fHNumber, envir(), fSource, OutPacketBuffer::maxSize, // OutPacketBuffer::maxSize determines fInputBufferSize (sihid)
ourMaxPacketSize() - 12/*RTP hdr size*/);
} else {
fOurFragmenter->reassignInputSource(fSource); // fSource (the H264VideoStreamFramer) is saved into fOurFragmenter's (the H264or5Fragmenter's) fInputSource
}
fSource = fOurFragmenter;
// Then call the parent class's implementation:
return MultiFramedRTPSink::continuePlaying();
}
posted @ 2017-01-20 12:15 lfc
【libxml2-2.9.2】
./configure --host=arm-linux-gnueabi --prefix=/home/luofc/work/tools/gcc-linaro-arm-linux-gnueabi-4.6.3-2012.02-20120201_linux/arm-linux-gnueabi --with-python=/home/luofc/work/tools/libxml2-2.9.2/python
make;make install
【Python-2.7.3】
patch -p1 < ../Python-2.7.3-xcompile.patch
./configure --host=arm-linux-gnueabi --prefix=/home/luofc/python
make HOSTPYTHON=./hostpython HOSTPGEN=./Parser/hostpgen BLDSHARED="arm-linux-gnueabi-gcc -shared" CROSS_COMPILE=arm-linux-gnueabi- CROSS_COMPILE_TARGET=yes
make;make install
【libplist】
PKG_CONFIG_PATH=/home/luofc/work/tools/gcc-linaro-arm-linux-gnueabi-4.6.3-2012.02-20120201_linux/arm-linux-gnueabi/lib/pkgconfig ./configure --host=arm-linux-gnueabi --prefix=/home/luofc/libimobiledevice/libplist LDFLAGS="-L/home/luofc/python/lib"
make;make install
【libusbmuxd】
PKG_CONFIG_PATH=/home/luofc/work/tools/gcc-linaro-arm-linux-gnueabi-4.6.3-2012.02-20120201_linux/arm-linux-gnueabi/lib/pkgconfig:/home/luofc/libimobiledevice/libplist/lib/pkgconfig ./configure --host=arm-linux-gnueabi --prefix=/home/luofc/libimobiledevice/libusbmuxd
make;make install
【usbmuxd】
PKG_CONFIG_PATH=/home/luofc/work/tools/gcc-linaro-arm-linux-gnueabi-4.6.3-2012.02-20120201_linux/arm-linux-gnueabi/lib/pkgconfig:/home/luofc/libimobiledevice/libplist/lib/pkgconfig ./configure --host=arm-linux-gnueabi --prefix=/home/luofc/libimobiledevice/usbmuxd --without-preflight
make;make install
posted @ 2016-07-14 10:07 lfc
1. host-m4-1.4.15
In file included from clean-temp.h:22:0,
from clean-temp.c:23:
./stdio.h:456:1:error: 'gets' undeclared here (not in a function)
_GL_WARN_ON_USE(gets, "gets is a security hole - use fgets instead");
Fix: see the following link:
http://www.civilnet.cn/talk/browse.php?topicno=78555, reply #2.
Locate host-m4-1.4.15/lib/stdio.h and apply the following change to stdio.h (and, if necessary, to stdio.in.h as well):
# Begin patch
=== modified file 'grub-core/gnulib/stdio.in.h'
--- grub-core/gnulib/stdio.in.h 2010-09-20 10:35:33 +0000
+++ grub-core/gnulib/stdio.in.h 2012-07-04 15:18:15 +0000
@@ -140,8 +140,10 @@
 /* It is very rare that the developer ever has full control of stdin,
    so any use of gets warrants an unconditional warning.  Assume it is
    always declared, since it is required by C89.  */
+#if defined gets
 #undef gets
 _GL_WARN_ON_USE (gets, "gets is a security hole - use fgets instead");
+#endif
2. host-autoconf-2.65
conftest.c:14625: must be after `@defmac' to use `@defmacx'
make[3]: *** [autoconf.info] Error 1
make[3]: Leaving directory `//opt/Android/a23androidSRC/lichee/out/linux/common/buildroot/build/host-autoconf-2.65/doc'
make[2]: *** [install-recursive] Error 1
make[2]: Leaving directory `/opt/Android/a23androidSRC/lichee/out/linux/common/buildroot/build/host-autoconf-2.65'
make[1]: *** [install] Error 2
make[1]: Leaving directory `/opt/Android/a23androidSRC/lichee/out/linux/common/buildroot/build/host-autoconf-2.65'
make: *** [/opt/Android/a23androidSRC/lichee/out/linux/common/buildroot/build/host-autoconf-2.65/.stamp_host_installed] Error 2
The fix:
Reference:
http://gnu-autoconf.7623.n7.nabble.com/compile-error-conftest-c-14625-must-be-after-defmac-to-use-defmacx-td18843.html
Reply #2 contains a patch:
--- autoconf-2.65/doc/autoconf.texi 2009-11-05 10:42:15.000000000 +0800
+++ autoconf-2.65/doc/autoconf.texi.new 2013-05-28 05:41:09.243770263 +0800
@@ -15,7 +15,7 @@
 @c The ARG is an optional argument.  To be used for macro arguments in
 @c their documentation (@defmac).
 @macro ovar{varname}
-@r{[}@var{\varname\}@r{]}@c
+@r{[}@var{\varname\}@r{]}
 @end macro

 @c @dvar(ARG, DEFAULT)
@@ -23,7 +23,7 @@
 @c The ARG is an optional argument, defaulting to DEFAULT.  To be used
 @c for macro arguments in their documentation (@defmac).
 @macro dvar{varname, default}
-@r{[}@var{\varname\} = @samp{\default\}@r{]}@c
+@r{[}@var{\varname\} = @samp{\default\}@r{]}
 @end macro

 @c Handling the indexes with Texinfo yields several different problems.
Apply the change in this patch directly to the source package; the next build will no longer report this error.
3. host-makedevs
/opt/Android/a23androidSRC/lichee/out/linux/common/buildroot/build/host-makedevs/makedevs.c:374:6: error: variable ‘ret’ set but not used [-Werror=unused-but-set-variable]
int ret = EXIT_SUCCESS;
^
cc1: all warnings being treated as errors
Edit makedevs.c directly: change the last line from return 0; to return ret;
Source location: ./buildroot/package/makedevs/makedevs.c
posted @ 2016-06-23 16:48 lfc
Abstract: static int mpegts_read_packet(AVFormatContext *s, AVPacket *pkt){xxxx ...
Read the full post
posted @ 2016-05-18 11:30 lfc
Android's Looper class encapsulates a message loop and a message queue, and is used for message handling on an Android thread. A Handler can be thought of as a utility class for inserting messages into that queue.
(1) Looper starts a message loop for a thread. By default a newly created Android thread has no message loop (the main thread is the exception: the system automatically creates its Looper and starts the loop). A Looper stores messages and events in a MessageQueue. A thread can have at most one Looper, and therefore one MessageQueue.
(2) You normally interact with a Looper through a Handler. A Handler can be seen as an interface to a Looper, used both to send messages to a given Looper and to define how they are handled. By default a Handler binds to the Looper of the thread it is created on; for example, a Handler created on the main thread binds to the main thread's Looper. mainHandler = new Handler() is equivalent to new Handler(Looper.myLooper()). Looper.myLooper() returns the Looper of the current thread; similarly, Looper.getMainLooper() returns the Looper of the main thread.
(3) Calling new Handler() directly on a non-main thread fails with: E/AndroidRuntime( 6173): Uncaught handler: thread Thread-8 exiting due to uncaught exception E/AndroidRuntime( 6173): java.lang.RuntimeException: Can't create handler inside thread that has not called Looper.prepare() The reason is that a non-main thread has no Looper by default; you must call Looper.prepare() first to set one up.
(4) Looper.loop() puts the Looper to work: it pulls messages off the queue and dispatches them.
Note: code written after Looper.loop() will not run; the function is internally a loop, and only after mHandler.getLooper().quit() is called does loop() return, allowing the code after it to execute.
(5) With the above, the main thread can send messages to a child (non-main) thread.
Declare mHandler in the example below as a class member, and send messages through it from the main thread.
Android官方文档中Looper的介绍: Class used to run a message loop for a thread. Threads by default do not have a message loop associated with them; to create one, call prepare() in the thread that is to run the loop, and then loop() to have it process messages until the loop is stopped.
Most interaction with a message loop is through the Handler class.
This is a typical example of the implementation of a Looper thread, using the separation of prepare() and loop() to create an initial Handler to communicate with the Looper.
class LooperThread extends Thread
{
    public Handler mHandler;
    public void run()
    {
        Looper.prepare();
        mHandler = new Handler()
        {
            public void handleMessage(Message msg)
            {
                // process incoming messages here
            }
        };
        Looper.loop();
    }
}
If a thread sets up a message queue with Looper.prepare() and Looper.loop(), message handling can be completed entirely on that thread.
A small example of Android's HandlerThread
We looked at Handler and the Looper message queue above; note that a plain Handler does not run on a separate thread, it still runs on the main UI thread. If you want a separate thread, use HandlerThread. When using HandlerThread you implement the Callback interface and override handleMessage, putting your own logic in handleMessage. A small example program follows.
The layout file is simple: just one button that starts the HandlerThread.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="vertical" >

    <TextView
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="@string/hello" />

    <Button
        android:id="@+id/handlerThreadBtn"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="startHandlerThread" />

</LinearLayout>
The Activity code:
package com.tayue;

import android.app.Activity;
import android.os.Bundle;
import android.os.Handler;
import android.os.Handler.Callback;
import android.os.HandlerThread;
import android.os.Message;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
/**
 *
 * @author xionglei
 *
 */
public class TestHandlerActivity extends Activity implements OnClickListener{

    public Button handlerThreadBTN;
    MyHandlerThread handlerThread;
    Handler handler;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // print the name of the UI thread
        System.out.println("onCreate CurrentThread = " + Thread.currentThread().getName());

        setContentView(R.layout.main);

        handlerThreadBTN = (Button) findViewById(R.id.handlerThreadBtn);
        handlerThreadBTN.setOnClickListener(this);

        handlerThread = new MyHandlerThread("myHanler");
        handlerThread.start();
        // Note: this Handler constructor must be used here, because the callback has to be passed in
        // so that our HandlerThread's handleMessage replaces the Handler's own handleMessage
        handler = new Handler(handlerThread.getLooper(), handlerThread);
    }

    @Override
    public void onClick(View v) {
        // when the button is clicked, hand the work over to the handler thread
        handler.sendEmptyMessage(1);
    }

    private class MyHandlerThread extends HandlerThread implements Callback {

        public MyHandlerThread(String name) {
            super(name);
        }

        @Override
        public boolean handleMessage(Message msg) {
            // print the name of the current thread
            System.out.println(" handleMessage CurrentThread = " + Thread.currentThread().getName());
            return true;
        }
    }
}
Clicking the button prints the following log (three clicks here):
07-06 09:32:48.776: I/System.out(780): onCreate CurrentThread = main
07-06 09:32:55.076: I/System.out(780): handleMessage CurrentThread = myHanler
07-06 09:32:58.669: I/System.out(780): handleMessage CurrentThread = myHanler
07-06 09:33:03.476: I/System.out(780): handleMessage CurrentThread = myHanler
That is all there is to HandlerThread.
Of course, Android also has its own helper for asynchronous work, AsyncTask, which wraps background-thread execution and a Handler to implement asynchronous, multi-threaded operations.
HandlerThread can also be used like this:
private boolean iscancel = false; // flag: the user manually cancelled the login

handlerThread = new HandlerThread("myHandlerThread");
handlerThread.start();
handler = new MyHandler(handlerThread.getLooper());
// post the runnable to be executed onto the handler thread's queue
handler.post(new Runnable() {
    @Override
    public void run() {
        Message message = handler.obtainMessage();
        UserBean user = Bbs.getInstance().Login(username, password); // time-consuming task
        Bundle b = new Bundle();
        b.putSerializable("user", user);
        message.setData(b);
        message.sendToTarget(); // or use handler.sendMessage(message);
    }
});
class MyHandler extends Handler {

    public MyHandler(Looper looper) {
        super(looper);
    }

    @Override
    public void handleMessage(Message msg) {
        if(iscancel == false){
            // code that operates on the UI
            Bundle b = msg.getData();
            UserBean user = (UserBean)b.get("user");

        }
    }
}
posted @ 2016-05-04 08:58 lfc